Test Report: KVM_Linux_crio 19345

418bbe9cf4ce8ef71c806703730b1f6a2265d8b5:2024-07-29:35554
Failed tests (30/326)

Order  Failed test  Duration (s)
43 TestAddons/parallel/Ingress 151.81
45 TestAddons/parallel/MetricsServer 362.22
54 TestAddons/StoppedEnableDisable 154.3
124 TestFunctional/parallel/MountCmd/any-port 242.69
173 TestMultiControlPlane/serial/StopSecondaryNode 141.87
175 TestMultiControlPlane/serial/RestartSecondaryNode 58.51
177 TestMultiControlPlane/serial/RestartClusterKeepsNodes 357.49
180 TestMultiControlPlane/serial/StopCluster 141.67
240 TestMultiNode/serial/RestartKeepsNodes 322.09
242 TestMultiNode/serial/StopMultiNode 141.37
249 TestPreload 269.17
257 TestKubernetesUpgrade 404
329 TestStartStop/group/old-k8s-version/serial/FirstStart 283.24
354 TestStartStop/group/no-preload/serial/Stop 139.07
357 TestStartStop/group/embed-certs/serial/Stop 139.1
360 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.11
361 TestStartStop/group/old-k8s-version/serial/DeployApp 0.51
362 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 95.18
363 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
365 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
367 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
371 TestStartStop/group/old-k8s-version/serial/SecondStart 762.1
372 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.03
373 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.02
374 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.21
375 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.26
376 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 458.98
377 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 375.65
378 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 315.12
379 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 116.53
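
Note: each failed subtest can usually be re-run in isolation for local triage with Go's standard -run filter. The sketch below assumes the minikube repository layout (tests under test/integration) and a local KVM/crio environment comparable to this job; the timeout value and any extra minikube start arguments are assumptions, not taken from this report.

  # Sketch: re-run only the Ingress subtest; adjust driver/runtime arguments to your environment
  go test ./test/integration -run 'TestAddons/parallel/Ingress' -timeout 60m -v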
TestAddons/parallel/Ingress (151.81s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-433102 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-433102 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-433102 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [bba16d61-afc5-4c02-85a7-8e1181099d91] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [bba16d61-afc5-4c02-85a7-8e1181099d91] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003231879s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-433102 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-433102 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.664308892s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
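
Note: status 28 is the exit code of the remote curl process, passed through by minikube's ssh wrapper; curl uses exit code 28 for "operation timed out", which is consistent with the command running for roughly 2m10s before failing. A manual reproduction sketch (the profile and context names come from this run; the --max-time value is an assumption):

  out/minikube-linux-amd64 -p addons-433102 ssh "curl -v --max-time 120 -H 'Host: nginx.example.com' http://127.0.0.1/"
  kubectl --context addons-433102 -n ingress-nginx get pods,svc -o wide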
addons_test.go:288: (dbg) Run:  kubectl --context addons-433102 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-433102 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.73
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-433102 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-433102 addons disable ingress-dns --alsologtostderr -v=1: (1.495044292s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-433102 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-433102 addons disable ingress --alsologtostderr -v=1: (7.694082722s)
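
Note: before the ingress addons are disabled (as they are above), two generic checks can show whether the controller ever admitted the test Ingress. These kubectl commands are not part of the test; only the context name and the controller label selector are taken from this log:

  kubectl --context addons-433102 get ingress -A
  kubectl --context addons-433102 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=100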
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-433102 -n addons-433102
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-433102 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-433102 logs -n 25: (1.192053255s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-090253                                                                     | download-only-090253 | jenkins | v1.33.1 | 29 Jul 24 16:56 UTC | 29 Jul 24 16:56 UTC |
	| delete  | -p download-only-254884                                                                     | download-only-254884 | jenkins | v1.33.1 | 29 Jul 24 16:56 UTC | 29 Jul 24 16:56 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-601375 | jenkins | v1.33.1 | 29 Jul 24 16:56 UTC |                     |
	|         | binary-mirror-601375                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:35651                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-601375                                                                     | binary-mirror-601375 | jenkins | v1.33.1 | 29 Jul 24 16:56 UTC | 29 Jul 24 16:56 UTC |
	| addons  | disable dashboard -p                                                                        | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:56 UTC |                     |
	|         | addons-433102                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:56 UTC |                     |
	|         | addons-433102                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-433102 --wait=true                                                                | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:56 UTC | 29 Jul 24 16:58 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-433102 addons disable                                                                | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:58 UTC | 29 Jul 24 16:58 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:59 UTC | 29 Jul 24 16:59 UTC |
	|         | addons-433102                                                                               |                      |         |         |                     |                     |
	| ip      | addons-433102 ip                                                                            | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:59 UTC | 29 Jul 24 16:59 UTC |
	| addons  | addons-433102 addons disable                                                                | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:59 UTC | 29 Jul 24 16:59 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-433102 addons disable                                                                | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:59 UTC | 29 Jul 24 16:59 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-433102 addons disable                                                                | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:59 UTC | 29 Jul 24 16:59 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-433102 ssh cat                                                                       | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:59 UTC | 29 Jul 24 16:59 UTC |
	|         | /opt/local-path-provisioner/pvc-b5b14fe5-d708-427a-a913-c11d781bebaf_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-433102 addons disable                                                                | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:59 UTC | 29 Jul 24 17:00 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:59 UTC | 29 Jul 24 16:59 UTC |
	|         | -p addons-433102                                                                            |                      |         |         |                     |                     |
	| addons  | addons-433102 addons                                                                        | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:59 UTC | 29 Jul 24 16:59 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-433102 ssh curl -s                                                                   | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:59 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-433102 addons                                                                        | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:59 UTC | 29 Jul 24 16:59 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:59 UTC | 29 Jul 24 16:59 UTC |
	|         | addons-433102                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:59 UTC | 29 Jul 24 16:59 UTC |
	|         | -p addons-433102                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-433102 addons disable                                                                | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 17:00 UTC | 29 Jul 24 17:00 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-433102 ip                                                                            | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 17:01 UTC | 29 Jul 24 17:01 UTC |
	| addons  | addons-433102 addons disable                                                                | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 17:01 UTC | 29 Jul 24 17:02 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-433102 addons disable                                                                | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 17:02 UTC | 29 Jul 24 17:02 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 16:56:12
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 16:56:12.537777   19312 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:56:12.538015   19312 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:56:12.538024   19312 out.go:304] Setting ErrFile to fd 2...
	I0729 16:56:12.538028   19312 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:56:12.538238   19312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 16:56:12.538849   19312 out.go:298] Setting JSON to false
	I0729 16:56:12.539664   19312 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2324,"bootTime":1722269848,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 16:56:12.539717   19312 start.go:139] virtualization: kvm guest
	I0729 16:56:12.541718   19312 out.go:177] * [addons-433102] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 16:56:12.542895   19312 notify.go:220] Checking for updates...
	I0729 16:56:12.542944   19312 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 16:56:12.544196   19312 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:56:12.545438   19312 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 16:56:12.546652   19312 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 16:56:12.547719   19312 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 16:56:12.549071   19312 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:56:12.550219   19312 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:56:12.581032   19312 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 16:56:12.582227   19312 start.go:297] selected driver: kvm2
	I0729 16:56:12.582243   19312 start.go:901] validating driver "kvm2" against <nil>
	I0729 16:56:12.582254   19312 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:56:12.583060   19312 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:56:12.583149   19312 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19345-11206/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 16:56:12.598402   19312 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 16:56:12.598471   19312 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:56:12.598672   19312 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:56:12.598725   19312 cni.go:84] Creating CNI manager for ""
	I0729 16:56:12.598738   19312 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 16:56:12.598749   19312 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:56:12.598807   19312 start.go:340] cluster config:
	{Name:addons-433102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-433102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:56:12.598908   19312 iso.go:125] acquiring lock: {Name:mke302f851ce8256f9b44dd080ed38df68285cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:56:12.600747   19312 out.go:177] * Starting "addons-433102" primary control-plane node in "addons-433102" cluster
	I0729 16:56:12.601910   19312 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 16:56:12.601939   19312 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 16:56:12.601947   19312 cache.go:56] Caching tarball of preloaded images
	I0729 16:56:12.602010   19312 preload.go:172] Found /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 16:56:12.602022   19312 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 16:56:12.602308   19312 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/config.json ...
	I0729 16:56:12.602326   19312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/config.json: {Name:mk66c10df021b2afa4711063c3ac523ffeb47dc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:56:12.602484   19312 start.go:360] acquireMachinesLock for addons-433102: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:56:12.602546   19312 start.go:364] duration metric: took 44.417µs to acquireMachinesLock for "addons-433102"
	I0729 16:56:12.602568   19312 start.go:93] Provisioning new machine with config: &{Name:addons-433102 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:addons-433102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 16:56:12.602628   19312 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 16:56:12.604389   19312 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0729 16:56:12.604498   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:56:12.604539   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:56:12.618272   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33451
	I0729 16:56:12.618673   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:56:12.619205   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:56:12.619220   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:56:12.619540   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:56:12.619745   19312 main.go:141] libmachine: (addons-433102) Calling .GetMachineName
	I0729 16:56:12.619875   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:56:12.620037   19312 start.go:159] libmachine.API.Create for "addons-433102" (driver="kvm2")
	I0729 16:56:12.620061   19312 client.go:168] LocalClient.Create starting
	I0729 16:56:12.620093   19312 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem
	I0729 16:56:12.832657   19312 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem
	I0729 16:56:12.988338   19312 main.go:141] libmachine: Running pre-create checks...
	I0729 16:56:12.988362   19312 main.go:141] libmachine: (addons-433102) Calling .PreCreateCheck
	I0729 16:56:12.988873   19312 main.go:141] libmachine: (addons-433102) Calling .GetConfigRaw
	I0729 16:56:12.989360   19312 main.go:141] libmachine: Creating machine...
	I0729 16:56:12.989375   19312 main.go:141] libmachine: (addons-433102) Calling .Create
	I0729 16:56:12.989557   19312 main.go:141] libmachine: (addons-433102) Creating KVM machine...
	I0729 16:56:12.990770   19312 main.go:141] libmachine: (addons-433102) DBG | found existing default KVM network
	I0729 16:56:12.991463   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:12.991317   19335 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0729 16:56:12.991481   19312 main.go:141] libmachine: (addons-433102) DBG | created network xml: 
	I0729 16:56:12.991525   19312 main.go:141] libmachine: (addons-433102) DBG | <network>
	I0729 16:56:12.991554   19312 main.go:141] libmachine: (addons-433102) DBG |   <name>mk-addons-433102</name>
	I0729 16:56:12.991568   19312 main.go:141] libmachine: (addons-433102) DBG |   <dns enable='no'/>
	I0729 16:56:12.991579   19312 main.go:141] libmachine: (addons-433102) DBG |   
	I0729 16:56:12.991595   19312 main.go:141] libmachine: (addons-433102) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 16:56:12.991606   19312 main.go:141] libmachine: (addons-433102) DBG |     <dhcp>
	I0729 16:56:12.991616   19312 main.go:141] libmachine: (addons-433102) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 16:56:12.991625   19312 main.go:141] libmachine: (addons-433102) DBG |     </dhcp>
	I0729 16:56:12.991636   19312 main.go:141] libmachine: (addons-433102) DBG |   </ip>
	I0729 16:56:12.991646   19312 main.go:141] libmachine: (addons-433102) DBG |   
	I0729 16:56:12.991672   19312 main.go:141] libmachine: (addons-433102) DBG | </network>
	I0729 16:56:12.991699   19312 main.go:141] libmachine: (addons-433102) DBG | 
	I0729 16:56:12.996884   19312 main.go:141] libmachine: (addons-433102) DBG | trying to create private KVM network mk-addons-433102 192.168.39.0/24...
	I0729 16:56:13.058076   19312 main.go:141] libmachine: (addons-433102) DBG | private KVM network mk-addons-433102 192.168.39.0/24 created
	I0729 16:56:13.058096   19312 main.go:141] libmachine: (addons-433102) Setting up store path in /home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102 ...
	I0729 16:56:13.058108   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:13.058053   19335 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 16:56:13.058176   19312 main.go:141] libmachine: (addons-433102) Building disk image from file:///home/jenkins/minikube-integration/19345-11206/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 16:56:13.058216   19312 main.go:141] libmachine: (addons-433102) Downloading /home/jenkins/minikube-integration/19345-11206/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19345-11206/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 16:56:13.312991   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:13.312850   19335 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa...
	I0729 16:56:13.604795   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:13.604672   19335 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/addons-433102.rawdisk...
	I0729 16:56:13.604817   19312 main.go:141] libmachine: (addons-433102) DBG | Writing magic tar header
	I0729 16:56:13.604826   19312 main.go:141] libmachine: (addons-433102) DBG | Writing SSH key tar header
	I0729 16:56:13.604834   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:13.604781   19335 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102 ...
	I0729 16:56:13.604898   19312 main.go:141] libmachine: (addons-433102) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102
	I0729 16:56:13.604926   19312 main.go:141] libmachine: (addons-433102) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube/machines
	I0729 16:56:13.604943   19312 main.go:141] libmachine: (addons-433102) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102 (perms=drwx------)
	I0729 16:56:13.604960   19312 main.go:141] libmachine: (addons-433102) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 16:56:13.604971   19312 main.go:141] libmachine: (addons-433102) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube/machines (perms=drwxr-xr-x)
	I0729 16:56:13.604983   19312 main.go:141] libmachine: (addons-433102) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube (perms=drwxr-xr-x)
	I0729 16:56:13.604990   19312 main.go:141] libmachine: (addons-433102) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206 (perms=drwxrwxr-x)
	I0729 16:56:13.604998   19312 main.go:141] libmachine: (addons-433102) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 16:56:13.605004   19312 main.go:141] libmachine: (addons-433102) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 16:56:13.605010   19312 main.go:141] libmachine: (addons-433102) Creating domain...
	I0729 16:56:13.605041   19312 main.go:141] libmachine: (addons-433102) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206
	I0729 16:56:13.605062   19312 main.go:141] libmachine: (addons-433102) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 16:56:13.605076   19312 main.go:141] libmachine: (addons-433102) DBG | Checking permissions on dir: /home/jenkins
	I0729 16:56:13.605091   19312 main.go:141] libmachine: (addons-433102) DBG | Checking permissions on dir: /home
	I0729 16:56:13.605103   19312 main.go:141] libmachine: (addons-433102) DBG | Skipping /home - not owner
	I0729 16:56:13.605992   19312 main.go:141] libmachine: (addons-433102) define libvirt domain using xml: 
	I0729 16:56:13.606019   19312 main.go:141] libmachine: (addons-433102) <domain type='kvm'>
	I0729 16:56:13.606027   19312 main.go:141] libmachine: (addons-433102)   <name>addons-433102</name>
	I0729 16:56:13.606033   19312 main.go:141] libmachine: (addons-433102)   <memory unit='MiB'>4000</memory>
	I0729 16:56:13.606038   19312 main.go:141] libmachine: (addons-433102)   <vcpu>2</vcpu>
	I0729 16:56:13.606043   19312 main.go:141] libmachine: (addons-433102)   <features>
	I0729 16:56:13.606049   19312 main.go:141] libmachine: (addons-433102)     <acpi/>
	I0729 16:56:13.606054   19312 main.go:141] libmachine: (addons-433102)     <apic/>
	I0729 16:56:13.606059   19312 main.go:141] libmachine: (addons-433102)     <pae/>
	I0729 16:56:13.606068   19312 main.go:141] libmachine: (addons-433102)     
	I0729 16:56:13.606079   19312 main.go:141] libmachine: (addons-433102)   </features>
	I0729 16:56:13.606090   19312 main.go:141] libmachine: (addons-433102)   <cpu mode='host-passthrough'>
	I0729 16:56:13.606100   19312 main.go:141] libmachine: (addons-433102)   
	I0729 16:56:13.606109   19312 main.go:141] libmachine: (addons-433102)   </cpu>
	I0729 16:56:13.606118   19312 main.go:141] libmachine: (addons-433102)   <os>
	I0729 16:56:13.606130   19312 main.go:141] libmachine: (addons-433102)     <type>hvm</type>
	I0729 16:56:13.606139   19312 main.go:141] libmachine: (addons-433102)     <boot dev='cdrom'/>
	I0729 16:56:13.606144   19312 main.go:141] libmachine: (addons-433102)     <boot dev='hd'/>
	I0729 16:56:13.606153   19312 main.go:141] libmachine: (addons-433102)     <bootmenu enable='no'/>
	I0729 16:56:13.606162   19312 main.go:141] libmachine: (addons-433102)   </os>
	I0729 16:56:13.606174   19312 main.go:141] libmachine: (addons-433102)   <devices>
	I0729 16:56:13.606185   19312 main.go:141] libmachine: (addons-433102)     <disk type='file' device='cdrom'>
	I0729 16:56:13.606200   19312 main.go:141] libmachine: (addons-433102)       <source file='/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/boot2docker.iso'/>
	I0729 16:56:13.606212   19312 main.go:141] libmachine: (addons-433102)       <target dev='hdc' bus='scsi'/>
	I0729 16:56:13.606238   19312 main.go:141] libmachine: (addons-433102)       <readonly/>
	I0729 16:56:13.606256   19312 main.go:141] libmachine: (addons-433102)     </disk>
	I0729 16:56:13.606266   19312 main.go:141] libmachine: (addons-433102)     <disk type='file' device='disk'>
	I0729 16:56:13.606275   19312 main.go:141] libmachine: (addons-433102)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 16:56:13.606295   19312 main.go:141] libmachine: (addons-433102)       <source file='/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/addons-433102.rawdisk'/>
	I0729 16:56:13.606303   19312 main.go:141] libmachine: (addons-433102)       <target dev='hda' bus='virtio'/>
	I0729 16:56:13.606309   19312 main.go:141] libmachine: (addons-433102)     </disk>
	I0729 16:56:13.606316   19312 main.go:141] libmachine: (addons-433102)     <interface type='network'>
	I0729 16:56:13.606335   19312 main.go:141] libmachine: (addons-433102)       <source network='mk-addons-433102'/>
	I0729 16:56:13.606354   19312 main.go:141] libmachine: (addons-433102)       <model type='virtio'/>
	I0729 16:56:13.606382   19312 main.go:141] libmachine: (addons-433102)     </interface>
	I0729 16:56:13.606412   19312 main.go:141] libmachine: (addons-433102)     <interface type='network'>
	I0729 16:56:13.606426   19312 main.go:141] libmachine: (addons-433102)       <source network='default'/>
	I0729 16:56:13.606436   19312 main.go:141] libmachine: (addons-433102)       <model type='virtio'/>
	I0729 16:56:13.606447   19312 main.go:141] libmachine: (addons-433102)     </interface>
	I0729 16:56:13.606457   19312 main.go:141] libmachine: (addons-433102)     <serial type='pty'>
	I0729 16:56:13.606469   19312 main.go:141] libmachine: (addons-433102)       <target port='0'/>
	I0729 16:56:13.606482   19312 main.go:141] libmachine: (addons-433102)     </serial>
	I0729 16:56:13.606494   19312 main.go:141] libmachine: (addons-433102)     <console type='pty'>
	I0729 16:56:13.606505   19312 main.go:141] libmachine: (addons-433102)       <target type='serial' port='0'/>
	I0729 16:56:13.606516   19312 main.go:141] libmachine: (addons-433102)     </console>
	I0729 16:56:13.606526   19312 main.go:141] libmachine: (addons-433102)     <rng model='virtio'>
	I0729 16:56:13.606540   19312 main.go:141] libmachine: (addons-433102)       <backend model='random'>/dev/random</backend>
	I0729 16:56:13.606554   19312 main.go:141] libmachine: (addons-433102)     </rng>
	I0729 16:56:13.606565   19312 main.go:141] libmachine: (addons-433102)     
	I0729 16:56:13.606575   19312 main.go:141] libmachine: (addons-433102)     
	I0729 16:56:13.606586   19312 main.go:141] libmachine: (addons-433102)   </devices>
	I0729 16:56:13.606595   19312 main.go:141] libmachine: (addons-433102) </domain>
	I0729 16:56:13.606608   19312 main.go:141] libmachine: (addons-433102) 
	I0729 16:56:13.613032   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:c5:e6:5e in network default
	I0729 16:56:13.613640   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:13.613678   19312 main.go:141] libmachine: (addons-433102) Ensuring networks are active...
	I0729 16:56:13.614340   19312 main.go:141] libmachine: (addons-433102) Ensuring network default is active
	I0729 16:56:13.614621   19312 main.go:141] libmachine: (addons-433102) Ensuring network mk-addons-433102 is active
	I0729 16:56:13.615116   19312 main.go:141] libmachine: (addons-433102) Getting domain xml...
	I0729 16:56:13.615767   19312 main.go:141] libmachine: (addons-433102) Creating domain...
	I0729 16:56:14.844379   19312 main.go:141] libmachine: (addons-433102) Waiting to get IP...
	I0729 16:56:14.845065   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:14.845426   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:14.845461   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:14.845411   19335 retry.go:31] will retry after 197.612216ms: waiting for machine to come up
	I0729 16:56:15.044833   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:15.045272   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:15.045299   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:15.045239   19335 retry.go:31] will retry after 327.669215ms: waiting for machine to come up
	I0729 16:56:15.374701   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:15.375059   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:15.375081   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:15.375032   19335 retry.go:31] will retry after 438.226444ms: waiting for machine to come up
	I0729 16:56:15.814684   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:15.815075   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:15.815103   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:15.815044   19335 retry.go:31] will retry after 451.065107ms: waiting for machine to come up
	I0729 16:56:16.267236   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:16.267570   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:16.267593   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:16.267543   19335 retry.go:31] will retry after 521.416625ms: waiting for machine to come up
	I0729 16:56:16.790575   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:16.790918   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:16.790965   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:16.790901   19335 retry.go:31] will retry after 941.217092ms: waiting for machine to come up
	I0729 16:56:17.733555   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:17.733988   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:17.734016   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:17.733945   19335 retry.go:31] will retry after 760.216596ms: waiting for machine to come up
	I0729 16:56:18.495589   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:18.496176   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:18.496215   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:18.496148   19335 retry.go:31] will retry after 998.832856ms: waiting for machine to come up
	I0729 16:56:19.496581   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:19.497020   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:19.497049   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:19.496970   19335 retry.go:31] will retry after 1.125358061s: waiting for machine to come up
	I0729 16:56:20.624351   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:20.624730   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:20.624760   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:20.624681   19335 retry.go:31] will retry after 1.46315279s: waiting for machine to come up
	I0729 16:56:22.090636   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:22.091015   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:22.091036   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:22.090991   19335 retry.go:31] will retry after 2.121606251s: waiting for machine to come up
	I0729 16:56:24.215078   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:24.215499   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:24.215527   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:24.215464   19335 retry.go:31] will retry after 2.844738203s: waiting for machine to come up
	I0729 16:56:27.063713   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:27.064234   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:27.064256   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:27.064195   19335 retry.go:31] will retry after 4.421324382s: waiting for machine to come up
	I0729 16:56:31.488709   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:31.489095   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:31.489174   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:31.489085   19335 retry.go:31] will retry after 4.584980769s: waiting for machine to come up
	I0729 16:56:36.077804   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.078382   19312 main.go:141] libmachine: (addons-433102) Found IP for machine: 192.168.39.73
	I0729 16:56:36.078399   19312 main.go:141] libmachine: (addons-433102) Reserving static IP address...
	I0729 16:56:36.078430   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has current primary IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.078830   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find host DHCP lease matching {name: "addons-433102", mac: "52:54:00:d8:3f:00", ip: "192.168.39.73"} in network mk-addons-433102
	I0729 16:56:36.147579   19312 main.go:141] libmachine: (addons-433102) DBG | Getting to WaitForSSH function...
	I0729 16:56:36.147609   19312 main.go:141] libmachine: (addons-433102) Reserved static IP address: 192.168.39.73
	I0729 16:56:36.147628   19312 main.go:141] libmachine: (addons-433102) Waiting for SSH to be available...
	I0729 16:56:36.149793   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.150186   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:36.150218   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.150436   19312 main.go:141] libmachine: (addons-433102) DBG | Using SSH client type: external
	I0729 16:56:36.150459   19312 main.go:141] libmachine: (addons-433102) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa (-rw-------)
	I0729 16:56:36.150488   19312 main.go:141] libmachine: (addons-433102) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.73 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 16:56:36.150506   19312 main.go:141] libmachine: (addons-433102) DBG | About to run SSH command:
	I0729 16:56:36.150518   19312 main.go:141] libmachine: (addons-433102) DBG | exit 0
	I0729 16:56:36.286191   19312 main.go:141] libmachine: (addons-433102) DBG | SSH cmd err, output: <nil>: 
	I0729 16:56:36.286473   19312 main.go:141] libmachine: (addons-433102) KVM machine creation complete!
	I0729 16:56:36.286764   19312 main.go:141] libmachine: (addons-433102) Calling .GetConfigRaw
	I0729 16:56:36.287302   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:56:36.287473   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:56:36.287605   19312 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 16:56:36.287619   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:56:36.288873   19312 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 16:56:36.288900   19312 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 16:56:36.288906   19312 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 16:56:36.288911   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:56:36.291004   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.291310   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:36.291330   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.291499   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:56:36.291685   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:36.291838   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:36.291990   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:56:36.292144   19312 main.go:141] libmachine: Using SSH client type: native
	I0729 16:56:36.292301   19312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0729 16:56:36.292311   19312 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 16:56:36.401591   19312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
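Note: the reachability probe above is just the trivial command `exit 0` run over SSH, first via the external client and then via the built-in one. A minimal stand-alone equivalent, reusing the key path and options shown in the log (illustrative only, not the exact helper minikube uses):

    ssh -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 \
        docker@192.168.39.73 'exit 0' && echo "SSH is available"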
	I0729 16:56:36.401613   19312 main.go:141] libmachine: Detecting the provisioner...
	I0729 16:56:36.401621   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:56:36.404145   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.404456   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:36.404484   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.404614   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:56:36.404798   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:36.404955   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:36.405131   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:56:36.405258   19312 main.go:141] libmachine: Using SSH client type: native
	I0729 16:56:36.405423   19312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0729 16:56:36.405434   19312 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 16:56:36.519057   19312 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 16:56:36.519122   19312 main.go:141] libmachine: found compatible host: buildroot
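Note: the provisioner check boils down to reading /etc/os-release on the guest and matching the distribution ID (buildroot here). A rough shell equivalent, assuming the same Buildroot guest:

    . /etc/os-release            # sets ID, VERSION_ID, PRETTY_NAME
    [ "$ID" = "buildroot" ] && echo "compatible host: $PRETTY_NAME"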
	I0729 16:56:36.519131   19312 main.go:141] libmachine: Provisioning with buildroot...
	I0729 16:56:36.519139   19312 main.go:141] libmachine: (addons-433102) Calling .GetMachineName
	I0729 16:56:36.519385   19312 buildroot.go:166] provisioning hostname "addons-433102"
	I0729 16:56:36.519412   19312 main.go:141] libmachine: (addons-433102) Calling .GetMachineName
	I0729 16:56:36.519574   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:56:36.522009   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.522343   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:36.522390   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.522484   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:56:36.522647   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:36.522800   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:36.522944   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:56:36.523138   19312 main.go:141] libmachine: Using SSH client type: native
	I0729 16:56:36.523306   19312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0729 16:56:36.523318   19312 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-433102 && echo "addons-433102" | sudo tee /etc/hostname
	I0729 16:56:36.652947   19312 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-433102
	
	I0729 16:56:36.652988   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:56:36.655710   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.656041   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:36.656060   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.656267   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:56:36.656450   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:36.656655   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:36.656769   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:56:36.656931   19312 main.go:141] libmachine: Using SSH client type: native
	I0729 16:56:36.657131   19312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0729 16:56:36.657154   19312 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-433102' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-433102/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-433102' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 16:56:36.781429   19312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 16:56:36.781456   19312 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 16:56:36.781488   19312 buildroot.go:174] setting up certificates
	I0729 16:56:36.781499   19312 provision.go:84] configureAuth start
	I0729 16:56:36.781507   19312 main.go:141] libmachine: (addons-433102) Calling .GetMachineName
	I0729 16:56:36.781752   19312 main.go:141] libmachine: (addons-433102) Calling .GetIP
	I0729 16:56:36.784322   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.784779   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:36.784799   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.784968   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:56:36.787267   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.787582   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:36.787604   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.787749   19312 provision.go:143] copyHostCerts
	I0729 16:56:36.787820   19312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 16:56:36.787988   19312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 16:56:36.788086   19312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 16:56:36.788160   19312 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.addons-433102 san=[127.0.0.1 192.168.39.73 addons-433102 localhost minikube]
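Note: the server certificate is signed by the local CA and carries the SANs listed above. A stand-alone openssl sketch that produces an equivalent certificate (file names are illustrative; minikube generates this in Go rather than shelling out):

    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.addons-433102" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.73,DNS:addons-433102,DNS:localhost,DNS:minikube') \
      -out server.pem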
	I0729 16:56:37.053230   19312 provision.go:177] copyRemoteCerts
	I0729 16:56:37.053301   19312 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 16:56:37.053329   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:56:37.056218   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.056619   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:37.056644   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.056802   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:56:37.056986   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:37.057148   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:56:37.057254   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:56:37.144532   19312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 16:56:37.168808   19312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 16:56:37.192168   19312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 16:56:37.215966   19312 provision.go:87] duration metric: took 434.454247ms to configureAuth
	I0729 16:56:37.215995   19312 buildroot.go:189] setting minikube options for container-runtime
	I0729 16:56:37.216181   19312 config.go:182] Loaded profile config "addons-433102": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 16:56:37.216264   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:56:37.218859   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.219159   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:37.219179   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.219393   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:56:37.219596   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:37.219767   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:37.219921   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:56:37.220156   19312 main.go:141] libmachine: Using SSH client type: native
	I0729 16:56:37.220330   19312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0729 16:56:37.220347   19312 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 16:56:37.587397   19312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 16:56:37.587427   19312 main.go:141] libmachine: Checking connection to Docker...
	I0729 16:56:37.587439   19312 main.go:141] libmachine: (addons-433102) Calling .GetURL
	I0729 16:56:37.588727   19312 main.go:141] libmachine: (addons-433102) DBG | Using libvirt version 6000000
	I0729 16:56:37.590958   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.591358   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:37.591392   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.591556   19312 main.go:141] libmachine: Docker is up and running!
	I0729 16:56:37.591571   19312 main.go:141] libmachine: Reticulating splines...
	I0729 16:56:37.591579   19312 client.go:171] duration metric: took 24.971510994s to LocalClient.Create
	I0729 16:56:37.591604   19312 start.go:167] duration metric: took 24.97156689s to libmachine.API.Create "addons-433102"
	I0729 16:56:37.591615   19312 start.go:293] postStartSetup for "addons-433102" (driver="kvm2")
	I0729 16:56:37.591629   19312 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 16:56:37.591649   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:56:37.591919   19312 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 16:56:37.591948   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:56:37.593994   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.594301   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:37.594325   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.594530   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:56:37.594733   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:37.594895   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:56:37.595160   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:56:37.680694   19312 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 16:56:37.684797   19312 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 16:56:37.684818   19312 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 16:56:37.684880   19312 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 16:56:37.684909   19312 start.go:296] duration metric: took 93.279335ms for postStartSetup
	I0729 16:56:37.684944   19312 main.go:141] libmachine: (addons-433102) Calling .GetConfigRaw
	I0729 16:56:37.719242   19312 main.go:141] libmachine: (addons-433102) Calling .GetIP
	I0729 16:56:37.721882   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.722218   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:37.722245   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.722490   19312 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/config.json ...
	I0729 16:56:37.722664   19312 start.go:128] duration metric: took 25.120027034s to createHost
	I0729 16:56:37.722683   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:56:37.724959   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.725330   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:37.725361   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.725526   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:56:37.725688   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:37.725840   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:37.725972   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:56:37.726113   19312 main.go:141] libmachine: Using SSH client type: native
	I0729 16:56:37.726324   19312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0729 16:56:37.726340   19312 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 16:56:37.843053   19312 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722272197.801311336
	
	I0729 16:56:37.843081   19312 fix.go:216] guest clock: 1722272197.801311336
	I0729 16:56:37.843092   19312 fix.go:229] Guest: 2024-07-29 16:56:37.801311336 +0000 UTC Remote: 2024-07-29 16:56:37.722674098 +0000 UTC m=+25.217297489 (delta=78.637238ms)
	I0729 16:56:37.843119   19312 fix.go:200] guest clock delta is within tolerance: 78.637238ms
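Note: the clock check compares the guest's `date +%s.%N` output with the host-side timestamp recorded when the command was issued; the ~79 ms delta is accepted as within tolerance. A manual reproduction of the comparison (the tolerance threshold itself is internal to minikube and not shown in this log):

    KEY=/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa
    guest=$(ssh -i "$KEY" -o StrictHostKeyChecking=no docker@192.168.39.73 'date +%s.%N')
    host=$(date +%s.%N)
    echo "clock delta: $(echo "$host - $guest" | bc) s"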
	I0729 16:56:37.843126   19312 start.go:83] releasing machines lock for "addons-433102", held for 25.240567796s
	I0729 16:56:37.843150   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:56:37.843417   19312 main.go:141] libmachine: (addons-433102) Calling .GetIP
	I0729 16:56:37.845864   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.846166   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:37.846191   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.846412   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:56:37.846847   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:56:37.847017   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:56:37.847124   19312 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 16:56:37.847162   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:56:37.847250   19312 ssh_runner.go:195] Run: cat /version.json
	I0729 16:56:37.847272   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:56:37.849637   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.849819   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.849931   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:37.849955   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.850090   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:37.850112   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:56:37.850112   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.850273   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:56:37.850329   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:37.850420   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:37.850469   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:56:37.850528   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:56:37.850584   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:56:37.850610   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:56:37.954570   19312 ssh_runner.go:195] Run: systemctl --version
	I0729 16:56:37.960600   19312 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 16:56:38.126790   19312 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 16:56:38.132613   19312 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 16:56:38.132670   19312 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 16:56:38.149032   19312 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 16:56:38.149046   19312 start.go:495] detecting cgroup driver to use...
	I0729 16:56:38.149107   19312 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 16:56:38.164447   19312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 16:56:38.177490   19312 docker.go:217] disabling cri-docker service (if available) ...
	I0729 16:56:38.177530   19312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 16:56:38.190727   19312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 16:56:38.203787   19312 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 16:56:38.316392   19312 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 16:56:38.453610   19312 docker.go:233] disabling docker service ...
	I0729 16:56:38.453696   19312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 16:56:38.468227   19312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 16:56:38.481220   19312 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 16:56:38.615689   19312 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 16:56:38.755640   19312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 16:56:38.769430   19312 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 16:56:38.787573   19312 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 16:56:38.787635   19312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 16:56:38.797782   19312 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 16:56:38.797847   19312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 16:56:38.808143   19312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 16:56:38.817840   19312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 16:56:38.827533   19312 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 16:56:38.837729   19312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 16:56:38.847681   19312 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 16:56:38.863760   19312 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 16:56:38.873190   19312 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 16:56:38.881868   19312 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 16:56:38.881913   19312 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 16:56:38.895660   19312 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 16:56:38.905721   19312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:56:39.031841   19312 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 16:56:39.161975   19312 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 16:56:39.162074   19312 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 16:56:39.166669   19312 start.go:563] Will wait 60s for crictl version
	I0729 16:56:39.166730   19312 ssh_runner.go:195] Run: which crictl
	I0729 16:56:39.170265   19312 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 16:56:39.206771   19312 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 16:56:39.206883   19312 ssh_runner.go:195] Run: crio --version
	I0729 16:56:39.233749   19312 ssh_runner.go:195] Run: crio --version
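Note: the sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon in the pod cgroup, unprivileged-port sysctl), writes the crictl endpoint and the insecure-registry drop-in, then restarts CRI-O. The result can be spot-checked on the guest with something like:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    cat /etc/crictl.yaml /etc/sysconfig/crio.minikube
    sudo crictl version          # expects RuntimeName cri-o, RuntimeVersion 1.29.1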
	I0729 16:56:39.263049   19312 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 16:56:39.264272   19312 main.go:141] libmachine: (addons-433102) Calling .GetIP
	I0729 16:56:39.266943   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:39.267277   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:39.267305   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:39.267488   19312 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 16:56:39.271361   19312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 16:56:39.283241   19312 kubeadm.go:883] updating cluster {Name:addons-433102 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:addons-433102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.73 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 16:56:39.283349   19312 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 16:56:39.283408   19312 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 16:56:39.319986   19312 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 16:56:39.320046   19312 ssh_runner.go:195] Run: which lz4
	I0729 16:56:39.324087   19312 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 16:56:39.328259   19312 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 16:56:39.328284   19312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 16:56:40.634045   19312 crio.go:462] duration metric: took 1.309991465s to copy over tarball
	I0729 16:56:40.634124   19312 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 16:56:42.859750   19312 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.225591595s)
	I0729 16:56:42.859781   19312 crio.go:469] duration metric: took 2.225705873s to extract the tarball
	I0729 16:56:42.859789   19312 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 16:56:42.897612   19312 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 16:56:42.954927   19312 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 16:56:42.954947   19312 cache_images.go:84] Images are preloaded, skipping loading
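Note: because the preload tarball was unpacked into /var, the second crictl pass finds every image for Kubernetes v1.30.3 already present and no pulls are needed. A quick spot-check for the control-plane images:

    sudo crictl images | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|pause'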
	I0729 16:56:42.954954   19312 kubeadm.go:934] updating node { 192.168.39.73 8443 v1.30.3 crio true true} ...
	I0729 16:56:42.955063   19312 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-433102 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-433102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 16:56:42.955149   19312 ssh_runner.go:195] Run: crio config
	I0729 16:56:43.005626   19312 cni.go:84] Creating CNI manager for ""
	I0729 16:56:43.005648   19312 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 16:56:43.005658   19312 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 16:56:43.005681   19312 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.73 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-433102 NodeName:addons-433102 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.73 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 16:56:43.005834   19312 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.73
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-433102"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.73
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.73"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 16:56:43.005905   19312 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 16:56:43.016291   19312 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 16:56:43.016348   19312 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 16:56:43.026602   19312 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0729 16:56:43.046001   19312 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 16:56:43.064806   19312 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
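Note: the rendered kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file) is staged as /var/tmp/minikube/kubeadm.yaml.new. With kubeadm v1.30 it could be sanity-checked before init; the validate subcommand is assumed to be available in this release:

    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new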
	I0729 16:56:43.083849   19312 ssh_runner.go:195] Run: grep 192.168.39.73	control-plane.minikube.internal$ /etc/hosts
	I0729 16:56:43.087981   19312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.73	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 16:56:43.100088   19312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:56:43.224424   19312 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 16:56:43.241475   19312 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102 for IP: 192.168.39.73
	I0729 16:56:43.241507   19312 certs.go:194] generating shared ca certs ...
	I0729 16:56:43.241523   19312 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:56:43.241661   19312 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 16:56:43.314518   19312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt ...
	I0729 16:56:43.314543   19312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt: {Name:mk7430f93e4eb66a7ae2250e2209426ae1a6ec80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:56:43.314691   19312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key ...
	I0729 16:56:43.314701   19312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key: {Name:mk343508971c6b777f48b3cf3c00a2a2d9184e15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:56:43.314773   19312 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 16:56:43.589451   19312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt ...
	I0729 16:56:43.589521   19312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt: {Name:mk193397c3fd162eb6f6b5a8a056aeb2bab9799e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:56:43.589701   19312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key ...
	I0729 16:56:43.589715   19312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key: {Name:mk0532d535b11308b747e8b70f9fa02e4226d30c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:56:43.589811   19312 certs.go:256] generating profile certs ...
	I0729 16:56:43.589880   19312 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.key
	I0729 16:56:43.589899   19312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt with IP's: []
	I0729 16:56:43.677844   19312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt ...
	I0729 16:56:43.677881   19312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: {Name:mk71f2a926e336f40bb13877ebd845ea67b83a6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:56:43.678091   19312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.key ...
	I0729 16:56:43.678108   19312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.key: {Name:mkbfaa04a247d8372ad86365fe1cfd8ea3a8259e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:56:43.678220   19312 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/apiserver.key.b26ac08d
	I0729 16:56:43.678247   19312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/apiserver.crt.b26ac08d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.73]
	I0729 16:56:43.839546   19312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/apiserver.crt.b26ac08d ...
	I0729 16:56:43.839577   19312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/apiserver.crt.b26ac08d: {Name:mk71478c571d6b22412d1acff607c39fddebb84f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:56:43.839754   19312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/apiserver.key.b26ac08d ...
	I0729 16:56:43.839770   19312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/apiserver.key.b26ac08d: {Name:mkc1fc6a26774617ef99371102f09cfd9edc163c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:56:43.839876   19312 certs.go:381] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/apiserver.crt.b26ac08d -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/apiserver.crt
	I0729 16:56:43.839967   19312 certs.go:385] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/apiserver.key.b26ac08d -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/apiserver.key
	I0729 16:56:43.840047   19312 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/proxy-client.key
	I0729 16:56:43.840071   19312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/proxy-client.crt with IP's: []
	I0729 16:56:43.913329   19312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/proxy-client.crt ...
	I0729 16:56:43.913356   19312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/proxy-client.crt: {Name:mk70ad2528d19153e54b6e99edab678b10352f19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:56:43.913523   19312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/proxy-client.key ...
	I0729 16:56:43.913537   19312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/proxy-client.key: {Name:mkf0a666bf793c173adb376a187ba2c0a6db82a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:56:43.913724   19312 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 16:56:43.913769   19312 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 16:56:43.913803   19312 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 16:56:43.913844   19312 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 16:56:43.914381   19312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 16:56:43.941664   19312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 16:56:43.966791   19312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 16:56:43.988404   19312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 16:56:44.011257   19312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 16:56:44.033777   19312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 16:56:44.056287   19312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 16:56:44.078663   19312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 16:56:44.102892   19312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 16:56:44.128607   19312 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 16:56:44.147578   19312 ssh_runner.go:195] Run: openssl version
	I0729 16:56:44.154852   19312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 16:56:44.168659   19312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 16:56:44.172919   19312 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 16:56:44.172968   19312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 16:56:44.178532   19312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
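Note: b5213941 is the OpenSSL subject hash of minikubeCA.pem, which is why the trust link is named b5213941.0; OpenSSL looks up CAs in /etc/ssl/certs by that hash. The two steps above amount to:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 for this CA
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0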
	I0729 16:56:44.188955   19312 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 16:56:44.193148   19312 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 16:56:44.193199   19312 kubeadm.go:392] StartCluster: {Name:addons-433102 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:addons-433102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.73 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:56:44.193269   19312 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 16:56:44.193304   19312 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 16:56:44.236499   19312 cri.go:89] found id: ""
	I0729 16:56:44.236579   19312 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 16:56:44.247426   19312 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 16:56:44.256812   19312 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 16:56:44.266162   19312 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 16:56:44.266181   19312 kubeadm.go:157] found existing configuration files:
	
	I0729 16:56:44.266223   19312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 16:56:44.275002   19312 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 16:56:44.275050   19312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 16:56:44.284208   19312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 16:56:44.292818   19312 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 16:56:44.292874   19312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 16:56:44.301657   19312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 16:56:44.310340   19312 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 16:56:44.310404   19312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 16:56:44.319527   19312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 16:56:44.327978   19312 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 16:56:44.328029   19312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 16:56:44.337088   19312 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 16:56:44.401919   19312 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 16:56:44.402182   19312 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 16:56:44.523563   19312 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 16:56:44.523734   19312 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 16:56:44.523909   19312 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 16:56:44.723453   19312 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 16:56:44.756902   19312 out.go:204]   - Generating certificates and keys ...
	I0729 16:56:44.757015   19312 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 16:56:44.757123   19312 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 16:56:45.031214   19312 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 16:56:45.204754   19312 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 16:56:45.404215   19312 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 16:56:45.599441   19312 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 16:56:45.830652   19312 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 16:56:45.830862   19312 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-433102 localhost] and IPs [192.168.39.73 127.0.0.1 ::1]
	I0729 16:56:45.946196   19312 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 16:56:45.946480   19312 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-433102 localhost] and IPs [192.168.39.73 127.0.0.1 ::1]
	I0729 16:56:46.088178   19312 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 16:56:46.199107   19312 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 16:56:46.264572   19312 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 16:56:46.264810   19312 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 16:56:46.465367   19312 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 16:56:46.571240   19312 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 16:56:46.611748   19312 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 16:56:46.731598   19312 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 16:56:47.004895   19312 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 16:56:47.005598   19312 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 16:56:47.007974   19312 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 16:56:47.009895   19312 out.go:204]   - Booting up control plane ...
	I0729 16:56:47.009995   19312 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 16:56:47.010091   19312 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 16:56:47.010176   19312 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 16:56:47.026160   19312 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 16:56:47.027140   19312 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 16:56:47.027203   19312 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 16:56:47.152677   19312 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 16:56:47.152752   19312 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 16:56:47.654669   19312 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.271717ms
	I0729 16:56:47.654750   19312 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 16:56:52.657144   19312 kubeadm.go:310] [api-check] The API server is healthy after 5.001911747s
	I0729 16:56:52.670659   19312 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 16:56:52.684613   19312 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 16:56:52.711926   19312 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 16:56:52.712173   19312 kubeadm.go:310] [mark-control-plane] Marking the node addons-433102 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 16:56:52.723734   19312 kubeadm.go:310] [bootstrap-token] Using token: w4q1ef.q8wav9dzw9ik2bkk
	I0729 16:56:52.725222   19312 out.go:204]   - Configuring RBAC rules ...
	I0729 16:56:52.725344   19312 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 16:56:52.730931   19312 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 16:56:52.742158   19312 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 16:56:52.746690   19312 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 16:56:52.750329   19312 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 16:56:52.753634   19312 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 16:56:53.065312   19312 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 16:56:53.516649   19312 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 16:56:54.065063   19312 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 16:56:54.065094   19312 kubeadm.go:310] 
	I0729 16:56:54.065170   19312 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 16:56:54.065182   19312 kubeadm.go:310] 
	I0729 16:56:54.065296   19312 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 16:56:54.065319   19312 kubeadm.go:310] 
	I0729 16:56:54.065370   19312 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 16:56:54.065462   19312 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 16:56:54.065542   19312 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 16:56:54.065550   19312 kubeadm.go:310] 
	I0729 16:56:54.065620   19312 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 16:56:54.065628   19312 kubeadm.go:310] 
	I0729 16:56:54.065705   19312 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 16:56:54.065720   19312 kubeadm.go:310] 
	I0729 16:56:54.065802   19312 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 16:56:54.065915   19312 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 16:56:54.066014   19312 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 16:56:54.066024   19312 kubeadm.go:310] 
	I0729 16:56:54.066130   19312 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 16:56:54.066250   19312 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 16:56:54.066261   19312 kubeadm.go:310] 
	I0729 16:56:54.066388   19312 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token w4q1ef.q8wav9dzw9ik2bkk \
	I0729 16:56:54.066543   19312 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 \
	I0729 16:56:54.066572   19312 kubeadm.go:310] 	--control-plane 
	I0729 16:56:54.066584   19312 kubeadm.go:310] 
	I0729 16:56:54.066704   19312 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 16:56:54.066713   19312 kubeadm.go:310] 
	I0729 16:56:54.066811   19312 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token w4q1ef.q8wav9dzw9ik2bkk \
	I0729 16:56:54.066946   19312 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 
	I0729 16:56:54.067182   19312 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 16:56:54.067214   19312 cni.go:84] Creating CNI manager for ""
	I0729 16:56:54.067223   19312 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 16:56:54.069009   19312 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 16:56:54.070272   19312 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 16:56:54.081181   19312 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 16:56:54.099028   19312 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 16:56:54.099155   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:56:54.099192   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-433102 minikube.k8s.io/updated_at=2024_07_29T16_56_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899 minikube.k8s.io/name=addons-433102 minikube.k8s.io/primary=true
	I0729 16:56:54.131399   19312 ops.go:34] apiserver oom_adj: -16
	I0729 16:56:54.228627   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:56:54.729164   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:56:55.228783   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:56:55.729076   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:56:56.229015   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:56:56.729127   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:56:57.228752   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:56:57.729245   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:56:58.228897   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:56:58.729020   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:56:59.229274   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:56:59.729385   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:00.229493   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:00.729518   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:01.229294   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:01.728836   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:02.229405   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:02.729489   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:03.229684   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:03.729378   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:04.229277   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:04.729357   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:05.229712   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:05.729553   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:06.229634   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:06.729285   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:06.810310   19312 kubeadm.go:1113] duration metric: took 12.711197871s to wait for elevateKubeSystemPrivileges
	I0729 16:57:06.810349   19312 kubeadm.go:394] duration metric: took 22.617153204s to StartCluster
	I0729 16:57:06.810382   19312 settings.go:142] acquiring lock: {Name:mkd2c4591636cc1d19b23a0dab1807db2e7ea395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:57:06.810539   19312 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 16:57:06.811023   19312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:57:06.811247   19312 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 16:57:06.811255   19312 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.73 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 16:57:06.811317   19312 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0729 16:57:06.811427   19312 addons.go:69] Setting gcp-auth=true in profile "addons-433102"
	I0729 16:57:06.811450   19312 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-433102"
	I0729 16:57:06.811457   19312 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-433102"
	I0729 16:57:06.811467   19312 addons.go:69] Setting default-storageclass=true in profile "addons-433102"
	I0729 16:57:06.811472   19312 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-433102"
	I0729 16:57:06.811448   19312 config.go:182] Loaded profile config "addons-433102": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 16:57:06.811503   19312 addons.go:69] Setting ingress=true in profile "addons-433102"
	I0729 16:57:06.811503   19312 addons.go:69] Setting volcano=true in profile "addons-433102"
	I0729 16:57:06.811512   19312 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-433102"
	I0729 16:57:06.811515   19312 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-433102"
	I0729 16:57:06.811521   19312 addons.go:234] Setting addon ingress=true in "addons-433102"
	I0729 16:57:06.811525   19312 addons.go:234] Setting addon volcano=true in "addons-433102"
	I0729 16:57:06.811498   19312 addons.go:69] Setting helm-tiller=true in profile "addons-433102"
	I0729 16:57:06.811546   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.811550   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.811560   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.811568   19312 addons.go:234] Setting addon helm-tiller=true in "addons-433102"
	I0729 16:57:06.811595   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.811654   19312 addons.go:69] Setting cloud-spanner=true in profile "addons-433102"
	I0729 16:57:06.811677   19312 addons.go:234] Setting addon cloud-spanner=true in "addons-433102"
	I0729 16:57:06.811679   19312 addons.go:69] Setting yakd=true in profile "addons-433102"
	I0729 16:57:06.811696   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.811705   19312 addons.go:234] Setting addon yakd=true in "addons-433102"
	I0729 16:57:06.811728   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.811982   19312 addons.go:69] Setting ingress-dns=true in profile "addons-433102"
	I0729 16:57:06.812002   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.812009   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.812013   19312 addons.go:69] Setting inspektor-gadget=true in profile "addons-433102"
	I0729 16:57:06.812023   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.812032   19312 addons.go:234] Setting addon inspektor-gadget=true in "addons-433102"
	I0729 16:57:06.812033   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.812036   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.812036   19312 addons.go:69] Setting volumesnapshots=true in profile "addons-433102"
	I0729 16:57:06.812045   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.812053   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.812061   19312 addons.go:234] Setting addon volumesnapshots=true in "addons-433102"
	I0729 16:57:06.811487   19312 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-433102"
	I0729 16:57:06.812081   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.812089   19312 addons.go:69] Setting metrics-server=true in profile "addons-433102"
	I0729 16:57:06.812023   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.812107   19312 addons.go:234] Setting addon metrics-server=true in "addons-433102"
	I0729 16:57:06.812111   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.812119   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.812008   19312 addons.go:234] Setting addon ingress-dns=true in "addons-433102"
	I0729 16:57:06.812180   19312 addons.go:69] Setting storage-provisioner=true in profile "addons-433102"
	I0729 16:57:06.812194   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.812206   19312 addons.go:234] Setting addon storage-provisioner=true in "addons-433102"
	I0729 16:57:06.812354   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.812369   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.812384   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.811489   19312 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-433102"
	I0729 16:57:06.812455   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.812490   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.812186   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.812535   19312 addons.go:69] Setting registry=true in profile "addons-433102"
	I0729 16:57:06.812556   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.812556   19312 addons.go:234] Setting addon registry=true in "addons-433102"
	I0729 16:57:06.812532   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.811462   19312 mustload.go:65] Loading cluster: addons-433102
	I0729 16:57:06.812589   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.812608   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.812639   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.812710   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.812725   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.812728   19312 config.go:182] Loaded profile config "addons-433102": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 16:57:06.812764   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.812853   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.812872   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.812952   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.812976   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.812999   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.813025   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.813039   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.813254   19312 out.go:177] * Verifying Kubernetes components...
	I0729 16:57:06.815519   19312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:57:06.832562   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38689
	I0729 16:57:06.832584   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34009
	I0729 16:57:06.832725   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36721
	I0729 16:57:06.832738   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39087
	I0729 16:57:06.833048   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.833293   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.833384   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.833442   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.833611   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.833636   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.833832   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.833849   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.833976   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.833987   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.834101   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.834122   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.834182   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.834223   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.834226   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.834503   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.834518   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.834729   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.834759   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.834882   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.834919   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.838741   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40279
	I0729 16:57:06.838758   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42929
	I0729 16:57:06.838903   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.838913   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.838919   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.838937   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.838950   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.838995   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.839190   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.839229   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.840250   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.841011   19312 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-433102"
	I0729 16:57:06.841054   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.841406   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.841440   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.841930   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.841948   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.846404   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.847101   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.847135   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.850296   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32823
	I0729 16:57:06.850465   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.851050   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.851158   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.851178   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.851625   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.851642   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.851704   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.852082   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.852485   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.852520   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.852650   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.852671   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.861978   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44237
	I0729 16:57:06.862388   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.862868   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.862888   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.863261   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.863467   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.866329   19312 addons.go:234] Setting addon default-storageclass=true in "addons-433102"
	I0729 16:57:06.866402   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.866761   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.866779   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.868727   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43689
	I0729 16:57:06.869150   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.870414   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.870433   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.870774   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.870958   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.871680   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33769
	I0729 16:57:06.872075   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.882461   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33853
	I0729 16:57:06.882584   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.882601   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.882610   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44791
	I0729 16:57:06.882675   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.883038   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.883525   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.883538   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.883867   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.883984   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
	I0729 16:57:06.884497   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.884515   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.884956   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.885569   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.885581   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.885882   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.886022   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.888151   19312 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0729 16:57:06.889225   19312 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0729 16:57:06.889245   19312 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0729 16:57:06.889264   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.889839   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41247
	I0729 16:57:06.889856   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.889961   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.892231   19312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0729 16:57:06.892647   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.893171   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.893197   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.893631   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.893819   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.893969   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.894092   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:06.894678   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.894869   19312 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0729 16:57:06.895976   19312 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0729 16:57:06.897109   19312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0729 16:57:06.897517   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.897538   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.897923   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.898479   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.898521   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.898785   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.898805   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.899327   19312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0729 16:57:06.900483   19312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0729 16:57:06.901433   19312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0729 16:57:06.902502   19312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0729 16:57:06.903157   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.903329   19312 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0729 16:57:06.903345   19312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0729 16:57:06.903363   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.903700   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.903719   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.906917   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.907327   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.907348   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.907416   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.908089   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.908126   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.908465   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.908653   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.908817   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.908954   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:06.909373   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34241
	I0729 16:57:06.909484   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41405
	I0729 16:57:06.910511   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.910511   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.910979   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.910997   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.911083   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.911097   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.912841   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.912848   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.912849   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40417
	I0729 16:57:06.912903   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40143
	I0729 16:57:06.912975   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44807
	I0729 16:57:06.913185   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.913266   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.913659   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.913686   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.913870   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45553
	I0729 16:57:06.913878   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.914557   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.914647   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.914670   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.915008   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.915152   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.915175   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.915510   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.915567   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32813
	I0729 16:57:06.915594   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.915610   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.915679   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36195
	I0729 16:57:06.915771   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.915907   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.915980   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.916242   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.916249   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.916261   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.916314   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:06.916326   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:06.917751   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34777
	I0729 16:57:06.917764   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.917808   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:06.917815   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:06.917824   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:06.917830   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:06.917750   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.917900   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.918173   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.918183   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.918258   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:06.918270   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.918287   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:06.918290   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.918296   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:06.918354   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	W0729 16:57:06.918380   19312 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0729 16:57:06.918824   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.918839   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.918898   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.919168   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.919287   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.919523   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.919547   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.919577   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.919806   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.920078   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.920620   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.920641   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.920964   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.921215   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.921490   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.921518   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.921561   19312 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0729 16:57:06.921600   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.921628   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.921855   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.922592   19312 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0729 16:57:06.922609   19312 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0729 16:57:06.922612   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.922626   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.923716   19312 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0729 16:57:06.924967   19312 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0729 16:57:06.925784   19312 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0729 16:57:06.925802   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0729 16:57:06.925818   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.925820   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.926354   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.926389   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.926538   19312 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0729 16:57:06.926550   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0729 16:57:06.926564   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.927126   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.927324   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.927604   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.927926   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:06.928243   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37827
	I0729 16:57:06.928650   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.929204   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.929231   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.929572   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.929727   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.931352   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.931649   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.932175   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.932358   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.932375   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.932528   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.932785   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.932811   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.932836   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.933029   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.933116   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.933363   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:06.933657   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.933793   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42489
	I0729 16:57:06.933872   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.934008   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:06.934375   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.934445   19312 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 16:57:06.935025   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.935044   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.935353   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.935952   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.935987   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.937403   19312 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0729 16:57:06.939228   19312 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 16:57:06.940495   19312 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 16:57:06.940521   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0729 16:57:06.940538   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.943540   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.943873   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.943893   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.944204   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.944436   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.944600   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.944762   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:06.948559   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42301
	I0729 16:57:06.948678   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33693
	I0729 16:57:06.948998   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.949189   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.949433   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.949449   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.949759   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.949776   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.949840   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.950210   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.950260   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.950486   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.952238   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.952290   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.954394   19312 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0729 16:57:06.954491   19312 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0729 16:57:06.955517   19312 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 16:57:06.955540   19312 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 16:57:06.955558   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.956194   19312 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 16:57:06.956208   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0729 16:57:06.956224   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.956522   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37103
	I0729 16:57:06.957287   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40245
	I0729 16:57:06.957445   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.957837   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.958123   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.958139   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.958624   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.958640   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.958997   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.959192   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.959248   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45551
	I0729 16:57:06.959396   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.959580   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.959645   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.959707   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.959838   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.959857   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.960007   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.960229   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.960404   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.960457   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.960481   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44693
	I0729 16:57:06.960698   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.960719   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.960731   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:06.962460   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.962525   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.962610   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.962979   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.962992   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.963054   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.963126   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.963173   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.963227   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.963241   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.963266   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:06.963519   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.963575   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.963748   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.963997   19312 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0729 16:57:06.965094   19312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0729 16:57:06.965108   19312 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0729 16:57:06.965123   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.965190   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.965725   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.966444   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.966771   19312 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 16:57:06.966783   19312 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 16:57:06.966797   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.967527   19312 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0729 16:57:06.967552   19312 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:57:06.968730   19312 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 16:57:06.968746   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0729 16:57:06.968772   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.968847   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.968915   19312 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 16:57:06.968930   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 16:57:06.968946   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.969364   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.969400   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.969851   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41757
	I0729 16:57:06.969896   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.970223   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.970404   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.970541   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.970817   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:06.971284   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.971300   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.971788   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.972638   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.972642   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.973109   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.973128   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.973157   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.973347   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.973501   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.973643   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.973783   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:06.974039   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.974060   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.974262   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.974438   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.974577   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.974722   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:06.975135   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.975588   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.975658   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.975793   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35689
	I0729 16:57:06.976056   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.976125   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.976201   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.976461   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.976661   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.976677   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.976754   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.977001   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.977157   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:06.977207   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.978181   19312 out.go:177]   - Using image docker.io/registry:2.8.3
	I0729 16:57:06.979207   19312 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0729 16:57:06.979559   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34881
	I0729 16:57:06.979915   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.980276   19312 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0729 16:57:06.980290   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0729 16:57:06.980301   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.980318   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.980343   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.981143   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.981330   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.983033   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.983087   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.983394   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.983417   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.983556   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.983715   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.983857   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.983961   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:06.984457   19312 out.go:177]   - Using image docker.io/busybox:stable
	I0729 16:57:06.985548   19312 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0729 16:57:06.986635   19312 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 16:57:06.986648   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0729 16:57:06.986662   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.989224   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.989638   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.989663   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.989807   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.989984   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.990135   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.990273   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:07.297462   19312 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0729 16:57:07.297484   19312 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0729 16:57:07.408626   19312 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0729 16:57:07.408650   19312 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0729 16:57:07.409109   19312 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0729 16:57:07.409142   19312 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0729 16:57:07.424659   19312 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0729 16:57:07.424679   19312 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0729 16:57:07.456352   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 16:57:07.486037   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0729 16:57:07.500014   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 16:57:07.501860   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 16:57:07.515215   19312 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0729 16:57:07.515238   19312 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0729 16:57:07.518182   19312 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0729 16:57:07.518204   19312 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0729 16:57:07.521101   19312 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 16:57:07.521129   19312 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 16:57:07.523436   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 16:57:07.525728   19312 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0729 16:57:07.525746   19312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0729 16:57:07.562242   19312 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 16:57:07.562263   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0729 16:57:07.564214   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 16:57:07.565547   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 16:57:07.576715   19312 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0729 16:57:07.576741   19312 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0729 16:57:07.603695   19312 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 16:57:07.603717   19312 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0729 16:57:07.620312   19312 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0729 16:57:07.620334   19312 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0729 16:57:07.633851   19312 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0729 16:57:07.633877   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0729 16:57:07.645011   19312 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0729 16:57:07.645044   19312 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0729 16:57:07.719191   19312 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0729 16:57:07.719215   19312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0729 16:57:07.729253   19312 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 16:57:07.729272   19312 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 16:57:07.734429   19312 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0729 16:57:07.734446   19312 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0729 16:57:07.758295   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0729 16:57:07.785630   19312 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0729 16:57:07.785652   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0729 16:57:07.815154   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 16:57:07.828403   19312 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0729 16:57:07.828437   19312 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0729 16:57:07.878577   19312 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0729 16:57:07.878606   19312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0729 16:57:07.879248   19312 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0729 16:57:07.879269   19312 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0729 16:57:07.924213   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0729 16:57:08.013979   19312 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 16:57:08.014005   19312 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 16:57:08.048829   19312 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0729 16:57:08.048849   19312 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0729 16:57:08.073430   19312 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0729 16:57:08.073457   19312 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0729 16:57:08.132628   19312 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0729 16:57:08.132651   19312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0729 16:57:08.263468   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 16:57:08.273098   19312 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0729 16:57:08.273127   19312 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0729 16:57:08.299081   19312 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 16:57:08.299101   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0729 16:57:08.481849   19312 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0729 16:57:08.481873   19312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0729 16:57:08.485222   19312 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 16:57:08.485239   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0729 16:57:08.704163   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 16:57:08.748023   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 16:57:08.968985   19312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0729 16:57:08.969007   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0729 16:57:09.108510   19312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0729 16:57:09.108541   19312 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0729 16:57:09.302347   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.845959947s)
	I0729 16:57:09.302403   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:09.302413   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:09.302758   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:09.302806   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:09.302815   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:09.302830   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:09.302840   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:09.303108   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:09.303130   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:09.303153   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:09.417796   19312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0729 16:57:09.417823   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0729 16:57:09.755920   19312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0729 16:57:09.755943   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0729 16:57:10.114626   19312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 16:57:10.114653   19312 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0729 16:57:10.273089   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.787020888s)
	I0729 16:57:10.273142   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:10.273152   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:10.273406   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:10.273465   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:10.273483   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:10.273499   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:10.273511   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:10.273765   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:10.274296   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:10.274317   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:10.463534   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 16:57:11.019376   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.519329979s)
	I0729 16:57:11.019420   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:11.019417   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.517537534s)
	I0729 16:57:11.019431   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:11.019441   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:11.019451   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:11.019471   19312 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.498312306s)
	I0729 16:57:11.019501   19312 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.498380294s)
	I0729 16:57:11.019500   19312 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0729 16:57:11.019799   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:11.019817   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:11.019826   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:11.019835   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:11.019883   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:11.019921   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:11.019941   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:11.019957   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:11.020366   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:11.020384   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:11.020568   19312 node_ready.go:35] waiting up to 6m0s for node "addons-433102" to be "Ready" ...
	I0729 16:57:11.020633   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:11.020657   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:11.020666   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:11.052350   19312 node_ready.go:49] node "addons-433102" has status "Ready":"True"
	I0729 16:57:11.052370   19312 node_ready.go:38] duration metric: took 31.783838ms for node "addons-433102" to be "Ready" ...
	I0729 16:57:11.052378   19312 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 16:57:11.107413   19312 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5kgv7" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:11.135533   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:11.135560   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:11.135845   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:11.135864   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:11.627560   19312 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-433102" context rescaled to 1 replicas
	I0729 16:57:12.492905   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.969439321s)
	I0729 16:57:12.492968   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:12.492980   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:12.492990   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.928741596s)
	I0729 16:57:12.493027   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:12.493043   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:12.493245   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:12.493348   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:12.493364   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:12.493373   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:12.493380   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:12.493353   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:12.493311   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:12.493287   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:12.493410   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:12.493499   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:12.493581   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:12.493594   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:12.493802   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:12.493816   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:12.591070   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:12.591090   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:12.591361   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:12.591379   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:13.130288   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-5kgv7" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:13.988798   19312 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0729 16:57:13.988835   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:13.992125   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:13.992589   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:13.992614   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:13.992785   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:13.992990   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:13.993155   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:13.993298   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:14.384748   19312 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0729 16:57:14.447148   19312 addons.go:234] Setting addon gcp-auth=true in "addons-433102"
	I0729 16:57:14.447204   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:14.447526   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:14.447553   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:14.463567   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37911
	I0729 16:57:14.463985   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:14.464504   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:14.464525   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:14.464855   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:14.465475   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:14.465507   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:14.481099   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44859
	I0729 16:57:14.481616   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:14.482085   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:14.482106   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:14.482514   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:14.482695   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:14.484438   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:14.484643   19312 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0729 16:57:14.484666   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:14.487307   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:14.487739   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:14.487763   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:14.487869   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:14.488007   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:14.488174   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:14.488316   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:15.393477   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.827898983s)
	I0729 16:57:15.393524   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:15.393533   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:15.393553   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.63522486s)
	I0729 16:57:15.393594   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:15.393605   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.578418649s)
	I0729 16:57:15.393638   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:15.393654   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:15.393610   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:15.393699   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.469454514s)
	I0729 16:57:15.393724   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:15.393733   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:15.393775   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.130278515s)
	I0729 16:57:15.393799   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:15.393811   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:15.393926   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:15.393950   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:15.393958   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:15.393962   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:15.393985   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:15.393994   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:15.394002   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:15.394008   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:15.394058   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:15.394082   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:15.394088   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:15.394098   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:15.394106   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:15.394165   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:15.394188   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:15.394195   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:15.394202   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:15.394208   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:15.394241   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:15.394258   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:15.394277   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:15.394285   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:15.394294   19312 addons.go:475] Verifying addon ingress=true in "addons-433102"
	I0729 16:57:15.394647   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.69044496s)
	W0729 16:57:15.394690   19312 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 16:57:15.394717   19312 retry.go:31] will retry after 199.459612ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 16:57:15.394801   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.646748098s)
	I0729 16:57:15.394818   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:15.394863   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:15.394923   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:15.394950   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:15.394956   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:15.395153   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:15.395166   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:15.395193   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:15.395201   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:15.395728   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:15.395761   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:15.395768   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:15.395775   19312 addons.go:475] Verifying addon metrics-server=true in "addons-433102"
	I0729 16:57:15.395932   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:15.395967   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:15.395980   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:15.393965   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:15.396179   19312 out.go:177] * Verifying ingress addon...
	I0729 16:57:15.396208   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:15.396236   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:15.396243   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:15.396250   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:15.396257   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:15.396842   19312 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-433102 service yakd-dashboard -n yakd-dashboard
	
	I0729 16:57:15.397074   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:15.398335   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:15.397103   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:15.397165   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:15.398438   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:15.398447   19312 addons.go:475] Verifying addon registry=true in "addons-433102"
	I0729 16:57:15.398484   19312 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0729 16:57:15.397496   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:15.399433   19312 out.go:177] * Verifying registry addon...
	I0729 16:57:15.401193   19312 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0729 16:57:15.423712   19312 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0729 16:57:15.423732   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:15.423896   19312 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0729 16:57:15.423914   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:15.594578   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 16:57:15.613425   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-5kgv7" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:15.903568   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:15.908862   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:16.417671   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:16.431413   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:16.602384   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.138759617s)
	I0729 16:57:16.602400   19312 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.11774003s)
	I0729 16:57:16.602435   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:16.602447   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:16.602747   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:16.602785   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:16.602808   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:16.602810   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:16.602899   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:16.603133   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:16.603179   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:16.603187   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:16.603196   19312 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-433102"
	I0729 16:57:16.604315   19312 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0729 16:57:16.604381   19312 out.go:177] * Verifying csi-hostpath-driver addon...
	I0729 16:57:16.606034   19312 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 16:57:16.606951   19312 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0729 16:57:16.609075   19312 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0729 16:57:16.609098   19312 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0729 16:57:16.646493   19312 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0729 16:57:16.646515   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:16.720570   19312 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0729 16:57:16.720599   19312 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0729 16:57:16.774222   19312 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 16:57:16.774243   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0729 16:57:16.810790   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 16:57:16.903307   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:16.907714   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:17.115407   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:17.403841   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:17.408300   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:17.614642   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:17.621216   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-5kgv7" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:17.784909   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.190283262s)
	I0729 16:57:17.784963   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:17.784985   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:17.785362   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:17.785401   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:17.785412   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:17.785423   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:17.785447   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:17.785687   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:17.785740   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:17.785752   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:17.903881   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:17.926744   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:18.171326   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:18.266967   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.456140834s)
	I0729 16:57:18.267029   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:18.267043   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:18.267406   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:18.267415   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:18.267432   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:18.267444   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:18.267458   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:18.267682   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:18.267739   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:18.269844   19312 addons.go:475] Verifying addon gcp-auth=true in "addons-433102"
	I0729 16:57:18.271303   19312 out.go:177] * Verifying gcp-auth addon...
	I0729 16:57:18.273680   19312 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0729 16:57:18.325271   19312 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0729 16:57:18.325290   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:18.405046   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:18.422894   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:18.614051   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:18.778293   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:18.903612   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:18.908258   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:19.193513   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:19.277474   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:19.408526   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:19.416265   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:19.613839   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:19.779219   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:19.904099   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:19.907010   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:20.123984   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-5kgv7" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:20.125246   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:20.278537   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:20.403683   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:20.408831   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:20.612898   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:20.777843   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:20.902537   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:20.905192   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:21.113001   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:21.277864   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:21.402610   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:21.404960   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:21.612680   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:21.777088   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:21.902807   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:21.906852   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:22.114122   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:22.283120   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:22.403371   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:22.405809   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:22.695950   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:22.701455   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-5kgv7" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:22.779121   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:22.903390   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:22.905223   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:23.115858   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:23.278140   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:23.403990   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:23.407018   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:23.613953   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:23.614565   19312 pod_ready.go:97] pod "coredns-7db6d8ff4d-5kgv7" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 16:57:23 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 16:57:07 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 16:57:07 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 16:57:07 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 16:57:07 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.73 HostIPs:[{IP:192.168.39.73}] PodIP: PodIPs:[] StartTime:2024-07-29 16:57:07 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-29 16:57:12 +0000 UTC,FinishedAt:2024-07-29 16:57:22 +0000 UTC,ContainerID:cri-o://47e73e792b774d9238a1fb14fafa9aebc4040430d69afa682d4e31b1270ec754,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://47e73e792b774d9238a1fb14fafa9aebc4040430d69afa682d4e31b1270ec754 Started:0xc001bafb00 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0729 16:57:23.614590   19312 pod_ready.go:81] duration metric: took 12.507153955s for pod "coredns-7db6d8ff4d-5kgv7" in "kube-system" namespace to be "Ready" ...
	E0729 16:57:23.614604   19312 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-5kgv7" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 16:57:23 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 16:57:07 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 16:57:07 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 16:57:07 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 16:57:07 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.73 HostIPs:[{IP:192.168.39.73}] PodIP: PodIPs:[] StartTime:2024-07-29 16:57:07 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-29 16:57:12 +0000 UTC,FinishedAt:2024-07-29 16:57:22 +0000 UTC,ContainerID:cri-o://47e73e792b774d9238a1fb14fafa9aebc4040430d69afa682d4e31b1270ec754,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://47e73e792b774d9238a1fb14fafa9aebc4040430d69afa682d4e31b1270ec754 Started:0xc001bafb00 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0729 16:57:23.614613   19312 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:23.777914   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:23.902844   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:23.905621   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:24.224642   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:24.277575   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:24.405796   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:24.405825   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:24.613195   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:24.777773   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:24.902714   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:24.905362   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:25.112474   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:25.279368   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:25.403302   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:25.406213   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:25.613035   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:25.619980   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:25.777065   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:25.902884   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:25.905350   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:26.112358   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:26.277144   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:26.402718   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:26.405557   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:26.612239   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:26.777061   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:26.902679   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:26.905375   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:27.686324   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:27.686774   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:27.690140   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:27.690563   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:27.691125   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:27.696218   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:27.777885   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:27.902515   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:27.905330   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:28.112117   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:28.277451   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:28.403392   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:28.406209   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:28.611440   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:28.779451   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:28.903222   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:28.905586   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:29.112492   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:29.277476   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:29.403167   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:29.408534   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:29.612054   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:29.777447   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:29.904379   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:29.906122   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:30.113510   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:30.119838   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:30.277045   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:30.403451   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:30.406227   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:30.614949   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:30.857320   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:30.903696   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:30.911626   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:31.113630   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:31.279127   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:31.402639   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:31.407704   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:31.614027   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:31.777082   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:31.909121   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:31.911681   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:32.117666   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:32.128537   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:32.278196   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:32.403892   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:32.406590   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:32.612910   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:32.777283   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:32.903110   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:32.905490   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:33.111764   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:33.277343   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:33.404097   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:33.406393   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:33.613046   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:33.776967   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:33.902616   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:33.905602   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:34.112779   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:34.277746   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:34.403568   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:34.406667   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:34.612680   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:34.620199   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:34.777553   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:34.985527   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:34.985970   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:35.113845   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:35.277318   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:35.405007   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:35.413462   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:35.612797   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:35.777283   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:35.904117   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:35.907818   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:36.112389   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:36.277717   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:36.420787   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:36.421927   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:36.613000   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:36.623153   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:36.777136   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:36.903635   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:36.910708   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:37.113089   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:37.277142   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:37.402841   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:37.405547   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:37.612151   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:37.782575   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:37.905288   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:37.908431   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:38.167872   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:38.381328   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:38.403328   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:38.411816   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:38.613236   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:38.777206   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:38.903188   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:38.906712   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:39.112883   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:39.120726   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:39.277758   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:39.402942   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:39.405737   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:39.612767   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:39.777788   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:39.904433   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:39.909108   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:40.113148   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:40.277793   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:40.402504   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:40.405276   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:40.611795   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:40.777735   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:40.902618   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:40.905334   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:41.119939   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:41.123696   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:41.278414   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:41.403233   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:41.406557   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:41.614897   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:41.784850   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:41.902500   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:41.911161   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:42.122719   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:42.283705   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:42.405069   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:42.409012   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:42.618797   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:42.777876   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:42.906548   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:42.908532   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:43.113780   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:43.130544   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:43.278006   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:43.402887   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:43.412910   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:43.612565   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:43.777555   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:43.903275   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:43.905651   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:44.113731   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:44.278675   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:44.403606   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:44.405716   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:44.613168   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:44.777760   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:44.902801   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:44.907920   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:45.113060   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:45.278677   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:45.402719   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:45.405187   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:45.613123   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:45.620842   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:45.778692   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:45.903122   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:45.906138   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:46.113059   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:46.277495   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:46.405608   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:46.405667   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:46.612562   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:46.777841   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:46.903480   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:46.907870   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:47.113268   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:47.277534   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:47.407963   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:47.416666   19312 kapi.go:107] duration metric: took 32.015468158s to wait for kubernetes.io/minikube-addons=registry ...
	I0729 16:57:47.796433   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:47.798297   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:47.799488   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:47.903213   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:48.113820   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:48.277097   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:48.402456   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:48.612623   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:48.619322   19312 pod_ready.go:92] pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace has status "Ready":"True"
	I0729 16:57:48.619345   19312 pod_ready.go:81] duration metric: took 25.004722524s for pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:48.619356   19312 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-433102" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:48.623647   19312 pod_ready.go:92] pod "etcd-addons-433102" in "kube-system" namespace has status "Ready":"True"
	I0729 16:57:48.623667   19312 pod_ready.go:81] duration metric: took 4.304122ms for pod "etcd-addons-433102" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:48.623677   19312 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-433102" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:48.627978   19312 pod_ready.go:92] pod "kube-apiserver-addons-433102" in "kube-system" namespace has status "Ready":"True"
	I0729 16:57:48.627994   19312 pod_ready.go:81] duration metric: took 4.309385ms for pod "kube-apiserver-addons-433102" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:48.628004   19312 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-433102" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:48.632988   19312 pod_ready.go:92] pod "kube-controller-manager-addons-433102" in "kube-system" namespace has status "Ready":"True"
	I0729 16:57:48.633006   19312 pod_ready.go:81] duration metric: took 4.994019ms for pod "kube-controller-manager-addons-433102" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:48.633018   19312 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6wcxr" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:48.638373   19312 pod_ready.go:92] pod "kube-proxy-6wcxr" in "kube-system" namespace has status "Ready":"True"
	I0729 16:57:48.638392   19312 pod_ready.go:81] duration metric: took 5.367654ms for pod "kube-proxy-6wcxr" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:48.638403   19312 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-433102" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:48.777331   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:48.905683   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:49.019228   19312 pod_ready.go:92] pod "kube-scheduler-addons-433102" in "kube-system" namespace has status "Ready":"True"
	I0729 16:57:49.019256   19312 pod_ready.go:81] duration metric: took 380.843864ms for pod "kube-scheduler-addons-433102" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:49.019270   19312 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-w9bhg" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:49.113500   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:49.376931   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:49.402071   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:49.417970   19312 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-w9bhg" in "kube-system" namespace has status "Ready":"True"
	I0729 16:57:49.417991   19312 pod_ready.go:81] duration metric: took 398.711328ms for pod "nvidia-device-plugin-daemonset-w9bhg" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:49.418008   19312 pod_ready.go:38] duration metric: took 38.365617846s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 16:57:49.418025   19312 api_server.go:52] waiting for apiserver process to appear ...
	I0729 16:57:49.418076   19312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:57:49.451649   19312 api_server.go:72] duration metric: took 42.640369496s to wait for apiserver process to appear ...
	I0729 16:57:49.451671   19312 api_server.go:88] waiting for apiserver healthz status ...
	I0729 16:57:49.451689   19312 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8443/healthz ...
	I0729 16:57:49.455914   19312 api_server.go:279] https://192.168.39.73:8443/healthz returned 200:
	ok
	I0729 16:57:49.457137   19312 api_server.go:141] control plane version: v1.30.3
	I0729 16:57:49.457160   19312 api_server.go:131] duration metric: took 5.483086ms to wait for apiserver health ...
	I0729 16:57:49.457168   19312 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 16:57:49.611937   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:49.624995   19312 system_pods.go:59] 18 kube-system pods found
	I0729 16:57:49.625019   19312 system_pods.go:61] "coredns-7db6d8ff4d-chxlc" [13483151-7a93-4b7e-bc8a-a0df4c049a67] Running
	I0729 16:57:49.625026   19312 system_pods.go:61] "csi-hostpath-attacher-0" [2c1c2c8c-4978-4a46-9e3b-dd66cdeeb31d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0729 16:57:49.625032   19312 system_pods.go:61] "csi-hostpath-resizer-0" [70844275-2cb5-4ef3-81cb-5e638a9d1107] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0729 16:57:49.625040   19312 system_pods.go:61] "csi-hostpathplugin-v9jld" [c81085b2-ef2e-48d1-b265-1becf684440b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0729 16:57:49.625044   19312 system_pods.go:61] "etcd-addons-433102" [06021977-6eba-44af-9f49-543aa605fdcd] Running
	I0729 16:57:49.625048   19312 system_pods.go:61] "kube-apiserver-addons-433102" [a737c877-f452-4e11-8665-567d05e884a3] Running
	I0729 16:57:49.625051   19312 system_pods.go:61] "kube-controller-manager-addons-433102" [7551355b-d9b5-4d57-b372-afbaadbd14fc] Running
	I0729 16:57:49.625054   19312 system_pods.go:61] "kube-ingress-dns-minikube" [e7277800-f99a-44f9-8098-4c1bf978bf95] Running
	I0729 16:57:49.625057   19312 system_pods.go:61] "kube-proxy-6wcxr" [508ba4dd-e6d5-438e-a66c-0188b555f367] Running
	I0729 16:57:49.625060   19312 system_pods.go:61] "kube-scheduler-addons-433102" [617259cb-04ad-4c62-99e8-b71aeb4ef8c3] Running
	I0729 16:57:49.625064   19312 system_pods.go:61] "metrics-server-c59844bb4-fdwdm" [377d84f1-430a-423a-8e08-3ffc0e083b56] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 16:57:49.625067   19312 system_pods.go:61] "nvidia-device-plugin-daemonset-w9bhg" [56c0414f-7d09-4189-9d58-7fc65a0d5eb8] Running
	I0729 16:57:49.625070   19312 system_pods.go:61] "registry-656c9c8d9c-bz6n2" [61225496-6f2a-48fa-b4f8-eab75fc915ba] Running
	I0729 16:57:49.625073   19312 system_pods.go:61] "registry-proxy-wnpcd" [5728a955-abcb-481c-8e81-300240983718] Running
	I0729 16:57:49.625077   19312 system_pods.go:61] "snapshot-controller-745499f584-9x5dq" [35e5ddb5-9e5a-4719-9b39-28d96d5b035a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 16:57:49.625082   19312 system_pods.go:61] "snapshot-controller-745499f584-hkqrc" [2efba456-3d43-4aa8-8262-f2a98c962296] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 16:57:49.625087   19312 system_pods.go:61] "storage-provisioner" [bb738aeb-40ec-47f1-9422-8c2a64cb1b38] Running
	I0729 16:57:49.625090   19312 system_pods.go:61] "tiller-deploy-6677d64bcd-dvkm9" [8c867f82-b890-4ac8-aa2d-74386a1f3bdb] Running
	I0729 16:57:49.625094   19312 system_pods.go:74] duration metric: took 167.922433ms to wait for pod list to return data ...
	I0729 16:57:49.625100   19312 default_sa.go:34] waiting for default service account to be created ...
	I0729 16:57:49.776690   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:49.818000   19312 default_sa.go:45] found service account: "default"
	I0729 16:57:49.818021   19312 default_sa.go:55] duration metric: took 192.915569ms for default service account to be created ...
	I0729 16:57:49.818028   19312 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 16:57:49.902832   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:50.029280   19312 system_pods.go:86] 18 kube-system pods found
	I0729 16:57:50.029311   19312 system_pods.go:89] "coredns-7db6d8ff4d-chxlc" [13483151-7a93-4b7e-bc8a-a0df4c049a67] Running
	I0729 16:57:50.029324   19312 system_pods.go:89] "csi-hostpath-attacher-0" [2c1c2c8c-4978-4a46-9e3b-dd66cdeeb31d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0729 16:57:50.029335   19312 system_pods.go:89] "csi-hostpath-resizer-0" [70844275-2cb5-4ef3-81cb-5e638a9d1107] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0729 16:57:50.029346   19312 system_pods.go:89] "csi-hostpathplugin-v9jld" [c81085b2-ef2e-48d1-b265-1becf684440b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0729 16:57:50.029354   19312 system_pods.go:89] "etcd-addons-433102" [06021977-6eba-44af-9f49-543aa605fdcd] Running
	I0729 16:57:50.029365   19312 system_pods.go:89] "kube-apiserver-addons-433102" [a737c877-f452-4e11-8665-567d05e884a3] Running
	I0729 16:57:50.029371   19312 system_pods.go:89] "kube-controller-manager-addons-433102" [7551355b-d9b5-4d57-b372-afbaadbd14fc] Running
	I0729 16:57:50.029382   19312 system_pods.go:89] "kube-ingress-dns-minikube" [e7277800-f99a-44f9-8098-4c1bf978bf95] Running
	I0729 16:57:50.029387   19312 system_pods.go:89] "kube-proxy-6wcxr" [508ba4dd-e6d5-438e-a66c-0188b555f367] Running
	I0729 16:57:50.029393   19312 system_pods.go:89] "kube-scheduler-addons-433102" [617259cb-04ad-4c62-99e8-b71aeb4ef8c3] Running
	I0729 16:57:50.029406   19312 system_pods.go:89] "metrics-server-c59844bb4-fdwdm" [377d84f1-430a-423a-8e08-3ffc0e083b56] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 16:57:50.029416   19312 system_pods.go:89] "nvidia-device-plugin-daemonset-w9bhg" [56c0414f-7d09-4189-9d58-7fc65a0d5eb8] Running
	I0729 16:57:50.029428   19312 system_pods.go:89] "registry-656c9c8d9c-bz6n2" [61225496-6f2a-48fa-b4f8-eab75fc915ba] Running
	I0729 16:57:50.029435   19312 system_pods.go:89] "registry-proxy-wnpcd" [5728a955-abcb-481c-8e81-300240983718] Running
	I0729 16:57:50.029446   19312 system_pods.go:89] "snapshot-controller-745499f584-9x5dq" [35e5ddb5-9e5a-4719-9b39-28d96d5b035a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 16:57:50.029458   19312 system_pods.go:89] "snapshot-controller-745499f584-hkqrc" [2efba456-3d43-4aa8-8262-f2a98c962296] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 16:57:50.029471   19312 system_pods.go:89] "storage-provisioner" [bb738aeb-40ec-47f1-9422-8c2a64cb1b38] Running
	I0729 16:57:50.029481   19312 system_pods.go:89] "tiller-deploy-6677d64bcd-dvkm9" [8c867f82-b890-4ac8-aa2d-74386a1f3bdb] Running
	I0729 16:57:50.029491   19312 system_pods.go:126] duration metric: took 211.456472ms to wait for k8s-apps to be running ...
	I0729 16:57:50.029501   19312 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 16:57:50.029545   19312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 16:57:50.066855   19312 system_svc.go:56] duration metric: took 37.344862ms WaitForService to wait for kubelet
	I0729 16:57:50.066890   19312 kubeadm.go:582] duration metric: took 43.255612143s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:57:50.066924   19312 node_conditions.go:102] verifying NodePressure condition ...
	I0729 16:57:50.113137   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:50.220565   19312 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 16:57:50.220600   19312 node_conditions.go:123] node cpu capacity is 2
	I0729 16:57:50.220616   19312 node_conditions.go:105] duration metric: took 153.68561ms to run NodePressure ...
	I0729 16:57:50.220632   19312 start.go:241] waiting for startup goroutines ...
	I0729 16:57:50.277341   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:50.404091   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:50.618337   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:50.778404   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:50.903526   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:51.113705   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:51.277681   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:51.403284   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:51.613472   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:51.777435   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:51.903174   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:52.113546   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:52.277321   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:52.407831   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:52.612931   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:52.777040   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:52.902990   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:53.115382   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:53.277242   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:53.402951   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:53.612944   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:53.777708   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:53.911597   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:54.113632   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:54.277715   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:54.403341   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:54.612599   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:54.777333   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:54.922509   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:55.113042   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:55.276932   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:55.403023   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:55.613653   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:55.778091   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:55.903112   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:56.116994   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:56.277507   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:56.403772   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:56.612287   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:56.778207   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:56.903411   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:57.114557   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:57.281637   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:57.403018   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:57.614772   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:57.777964   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:57.902724   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:58.113413   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:58.277957   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:58.403072   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:58.612596   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:58.777511   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:58.903483   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:59.125122   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:59.278399   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:59.403341   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:59.612740   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:59.777024   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:59.902675   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:00.111543   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:00.277170   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:00.402625   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:00.611969   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:00.780555   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:00.904014   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:01.113352   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:01.279201   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:01.402753   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:01.612615   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:01.777524   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:01.903226   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:02.116390   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:02.564907   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:02.741072   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:02.741396   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:02.777350   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:02.903682   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:03.112919   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:03.283207   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:03.402190   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:03.615514   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:03.777326   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:03.905402   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:04.112814   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:04.278043   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:04.409764   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:04.612755   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:04.783699   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:04.902226   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:05.120576   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:05.277637   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:05.412297   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:05.643919   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:05.777089   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:05.903430   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:06.117111   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:06.277410   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:06.402662   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:06.621146   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:06.780092   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:06.904568   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:07.113183   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:07.277276   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:07.403679   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:07.612526   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:07.780582   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:07.903396   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:08.116585   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:08.283785   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:08.404554   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:08.613383   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:08.778535   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:08.910205   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:09.114636   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:09.277358   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:09.403280   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:09.612948   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:09.777293   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:09.903206   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:10.113470   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:10.278433   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:10.403658   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:10.612369   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:10.777416   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:10.904894   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:11.115286   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:11.277859   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:11.403459   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:11.612829   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:11.777687   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:11.903521   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:12.112192   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:12.278162   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:12.405075   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:12.612372   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:12.779487   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:12.903135   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:13.114043   19312 kapi.go:107] duration metric: took 56.507089151s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0729 16:58:13.277163   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:13.402972   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:13.777091   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:13.904813   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:14.277454   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:14.403230   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:14.777283   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:14.903661   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:15.277560   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:15.402902   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:15.777544   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:15.903483   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:16.277174   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:16.403371   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:16.776743   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:16.902900   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:17.277772   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:17.402189   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:17.781333   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:17.903250   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:18.277295   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:18.403332   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:18.777227   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:18.903916   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:19.280730   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:19.402847   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:19.777772   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:19.902599   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:20.277347   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:20.403808   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:20.778243   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:20.903474   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:21.276942   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:21.405117   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:21.777924   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:21.903401   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:22.277721   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:22.402721   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:22.825116   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:22.903106   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:23.277059   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:23.402689   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:23.777830   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:23.902687   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:24.277892   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:24.402368   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:24.781025   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:24.902844   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:25.277628   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:25.403544   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:25.777472   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:25.903522   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:26.277469   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:26.403361   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:26.777150   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:26.903707   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:27.277984   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:27.402771   19312 kapi.go:107] duration metric: took 1m12.004283711s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0729 16:58:27.777320   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:28.281297   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:28.778156   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:29.278104   19312 kapi.go:107] duration metric: took 1m11.004421424s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0729 16:58:29.279836   19312 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-433102 cluster.
	I0729 16:58:29.281160   19312 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0729 16:58:29.282330   19312 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0729 16:58:29.283554   19312 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, default-storageclass, storage-provisioner, storage-provisioner-rancher, metrics-server, helm-tiller, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0729 16:58:29.284860   19312 addons.go:510] duration metric: took 1m22.473543206s for enable addons: enabled=[nvidia-device-plugin cloud-spanner ingress-dns default-storageclass storage-provisioner storage-provisioner-rancher metrics-server helm-tiller inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0729 16:58:29.284901   19312 start.go:246] waiting for cluster config update ...
	I0729 16:58:29.284918   19312 start.go:255] writing updated cluster config ...
	I0729 16:58:29.285143   19312 ssh_runner.go:195] Run: rm -f paused
	I0729 16:58:29.332555   19312 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 16:58:29.334470   19312 out.go:177] * Done! kubectl is now configured to use "addons-433102" cluster and "default" namespace by default
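	(Editor's note, not part of the captured log: the gcp-auth output above says credentials are mounted into every new pod unless the pod carries the `gcp-auth-skip-secret` label. As a minimal sketch only — the pod name, namespace, and container below are hypothetical and not taken from this run — a manifest that opts a single pod out would carry the label in its metadata:

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: example-no-gcp-creds        # hypothetical name, not from this report
	      namespace: default
	      labels:
	        gcp-auth-skip-secret: "true"    # label key quoted in the minikube output above
	    spec:
	      containers:
	      - name: app
	        image: gcr.io/k8s-minikube/busybox   # image that also appears in the CRI-O log below; any image works
	        command: ["sleep", "3600"]

	Pods created without this label keep receiving the mounted credentials, per the addon output above.)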
	
	
	==> CRI-O <==
	Jul 29 17:02:08 addons-433102 crio[683]: time="2024-07-29 17:02:08.496439819Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722272528496413633,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d4d32a5d-dedd-4edb-acae-1101728e76f2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:02:08 addons-433102 crio[683]: time="2024-07-29 17:02:08.497134669Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aff013b4-7c77-4972-beef-c88c0ccb75dc name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:02:08 addons-433102 crio[683]: time="2024-07-29 17:02:08.497352345Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aff013b4-7c77-4972-beef-c88c0ccb75dc name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:02:08 addons-433102 crio[683]: time="2024-07-29 17:02:08.497634959Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f40ae879569d775a79f117bd872c259ff93223ced84a0e688802eab6411d8c7,PodSandboxId:3570749d9d79d684051602d352dba7823388fd99b5f9878334cbc6d894862f43,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722272519487084714,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-cz2bv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 773dada2-8958-460d-b4f8-53d9981e74ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2be7a527,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552417b813e20a7ce35e6fcea96e06ad05be03de1a0bd835a9bf6528b1b97ed0,PodSandboxId:14569a4d3adaa424921ec1367e09008c4a61d82b4d4420f28aa901a13496fca1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722272380866463815,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bba16d61-afc5-4c02-85a7-8e1181099d91,},Annotations:map[string]string{io.kubernet
es.container.hash: da8d9711,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:168151c1371e9807e41e98e386b805a9c081e1884f375f772acd59461fd1e4e1,PodSandboxId:ad0779037f7870e83416e7b3d9c156a46b69c43805d64302b606f8dd75df6fa3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722272311011477451,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e3ae3c83-a5c9-4ac7-8
e5e-89b7df19295c,},Annotations:map[string]string{io.kubernetes.container.hash: 9aacfe94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f05ed99aac59f862e4fa68ec9a7cc203e689d314aef7acd1ce66000468e98f7,PodSandboxId:94aa128b95fdd22d539586659e5ac7a7ecb8e5c15f25c9cbdc8ee07a83c0048a,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722272279179726794,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hwdl9,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 6dd37089-7382-4c1f-976d-fd39b4d60eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 3b96f1fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e3723967067fd8c4aff73427ece3579d27b86c225b1a8485d140c46bce1f89a,PodSandboxId:56aabf01c114c64b76e1249bcaa2568327634a01e5757e8d80952c9bb31b476a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722272279057899741,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4cfgf,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3cafb5cf-a3ce-4182-bfcb-861cb20d8d31,},Annotations:map[string]string{io.kubernetes.container.hash: cdeb40e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba0974d77ed6cae9390e642d6962087e3c80b050cf6bfa114fa6593bde64aee7,PodSandboxId:788563bf38e22d42ff70445a3e7dc5ca86356221a9c50ccbe097c161a570db36,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722272261121646806,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-c59844bb4-fdwdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377d84f1-430a-423a-8e08-3ffc0e083b56,},Annotations:map[string]string{io.kubernetes.container.hash: b69a97df,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2599cbcd1abdc6363e42cd84b81942c2062fb81ff763fe53dd406df7addc2b42,PodSandboxId:3f828acf7d097497af16440cc6cd07ae40d2bc608a845a408412fdda28abb0c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722272233545936328,La
bels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb738aeb-40ec-47f1-9422-8c2a64cb1b38,},Annotations:map[string]string{io.kubernetes.container.hash: 5db4b699,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36bd6adcb73e3db92ab8716fd9db0d1c3a693fba74911af5ad6739dd21be75cb,PodSandboxId:a7d20bcb6427eaaa99e2a808005690bbce7f36eb60d7bb3b5b7689203d9e1cc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722272231701006614,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-chxlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13483151-7a93-4b7e-bc8a-a0df4c049a67,},Annotations:map[string]string{io.kubernetes.container.hash: 82a81137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a55b409fab4e27959260804b6052797f442f48d1411d6f5b444548fa1720f7d,PodSandboxId:8fbf004632d3aa84babf678ad0864bf270121c283fcfba5f4aa1a1c73f779ca9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722272229216967769,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6wcxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 508ba4dd-e6d5-438e-a66c-0188b555f367,},Annotations:map[string]string{io.kubernetes.container.hash: 4e980fe0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316657090f5cffcd0503b91d427543c65807dac517373699e629c60a47444356,PodSandboxId:a22c8ef147418ae3b7b41984990582f394249928eb39a2cb517dbe663f43fbb2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722272208145849572,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6b3873e1eb7772d5b00a12b153cb28c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79298bbe1b2337e09867ac114b4875e36699d0eb1ba5b9725b468712ac570005,PodSandboxId:1a0140bf9d63825f6e047dd86d15a682e5e28597567507432eedafaa1e785527,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecified
Image:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722272208173742099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2280a372007a9e99150f8ed8e7385ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654027072dc843ca69f02c3c234a1eae4c56b9d9447ece343121a56ed3166d37,PodSandboxId:bf49bf99b816bf5d51496123adba38127de234b880313dc7cbb09e625c7b0906,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Im
ageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722272208088937677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b994742abe79d480c8f0ba290e51e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 8aba7843,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e54d28c2754d2aa11d61943c6e860b690ffc023075a34001580b78070aae803,PodSandboxId:c7bea62da2e92e649f0efd165cccb1c468a357414fbcbbb361e1db55f1f4bdcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a
964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722272208090472348,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58abe85a975931c71d2ced52e3a7744c,},Annotations:map[string]string{io.kubernetes.container.hash: 91dad007,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aff013b4-7c77-4972-beef-c88c0ccb75dc name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:02:08 addons-433102 crio[683]: time="2024-07-29 17:02:08.536583665Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3ddeefd1-d532-483f-ab78-7f74d66e8893 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:02:08 addons-433102 crio[683]: time="2024-07-29 17:02:08.536654214Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3ddeefd1-d532-483f-ab78-7f74d66e8893 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:02:08 addons-433102 crio[683]: time="2024-07-29 17:02:08.537868291Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=53a6a50d-3571-4b0c-8308-8fcca9fac408 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:02:08 addons-433102 crio[683]: time="2024-07-29 17:02:08.539378791Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722272528539302646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=53a6a50d-3571-4b0c-8308-8fcca9fac408 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:02:08 addons-433102 crio[683]: time="2024-07-29 17:02:08.539919207Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e5a7e9ab-063b-45ae-bfc0-302ec62e2401 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:02:08 addons-433102 crio[683]: time="2024-07-29 17:02:08.539976930Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e5a7e9ab-063b-45ae-bfc0-302ec62e2401 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:02:08 addons-433102 crio[683]: time="2024-07-29 17:02:08.540303987Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f40ae879569d775a79f117bd872c259ff93223ced84a0e688802eab6411d8c7,PodSandboxId:3570749d9d79d684051602d352dba7823388fd99b5f9878334cbc6d894862f43,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722272519487084714,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-cz2bv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 773dada2-8958-460d-b4f8-53d9981e74ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2be7a527,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552417b813e20a7ce35e6fcea96e06ad05be03de1a0bd835a9bf6528b1b97ed0,PodSandboxId:14569a4d3adaa424921ec1367e09008c4a61d82b4d4420f28aa901a13496fca1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722272380866463815,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bba16d61-afc5-4c02-85a7-8e1181099d91,},Annotations:map[string]string{io.kubernet
es.container.hash: da8d9711,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:168151c1371e9807e41e98e386b805a9c081e1884f375f772acd59461fd1e4e1,PodSandboxId:ad0779037f7870e83416e7b3d9c156a46b69c43805d64302b606f8dd75df6fa3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722272311011477451,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e3ae3c83-a5c9-4ac7-8
e5e-89b7df19295c,},Annotations:map[string]string{io.kubernetes.container.hash: 9aacfe94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f05ed99aac59f862e4fa68ec9a7cc203e689d314aef7acd1ce66000468e98f7,PodSandboxId:94aa128b95fdd22d539586659e5ac7a7ecb8e5c15f25c9cbdc8ee07a83c0048a,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722272279179726794,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hwdl9,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 6dd37089-7382-4c1f-976d-fd39b4d60eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 3b96f1fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e3723967067fd8c4aff73427ece3579d27b86c225b1a8485d140c46bce1f89a,PodSandboxId:56aabf01c114c64b76e1249bcaa2568327634a01e5757e8d80952c9bb31b476a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722272279057899741,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4cfgf,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3cafb5cf-a3ce-4182-bfcb-861cb20d8d31,},Annotations:map[string]string{io.kubernetes.container.hash: cdeb40e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba0974d77ed6cae9390e642d6962087e3c80b050cf6bfa114fa6593bde64aee7,PodSandboxId:788563bf38e22d42ff70445a3e7dc5ca86356221a9c50ccbe097c161a570db36,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722272261121646806,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-c59844bb4-fdwdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377d84f1-430a-423a-8e08-3ffc0e083b56,},Annotations:map[string]string{io.kubernetes.container.hash: b69a97df,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2599cbcd1abdc6363e42cd84b81942c2062fb81ff763fe53dd406df7addc2b42,PodSandboxId:3f828acf7d097497af16440cc6cd07ae40d2bc608a845a408412fdda28abb0c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722272233545936328,La
bels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb738aeb-40ec-47f1-9422-8c2a64cb1b38,},Annotations:map[string]string{io.kubernetes.container.hash: 5db4b699,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36bd6adcb73e3db92ab8716fd9db0d1c3a693fba74911af5ad6739dd21be75cb,PodSandboxId:a7d20bcb6427eaaa99e2a808005690bbce7f36eb60d7bb3b5b7689203d9e1cc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722272231701006614,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-chxlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13483151-7a93-4b7e-bc8a-a0df4c049a67,},Annotations:map[string]string{io.kubernetes.container.hash: 82a81137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a55b409fab4e27959260804b6052797f442f48d1411d6f5b444548fa1720f7d,PodSandboxId:8fbf004632d3aa84babf678ad0864bf270121c283fcfba5f4aa1a1c73f779ca9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722272229216967769,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6wcxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 508ba4dd-e6d5-438e-a66c-0188b555f367,},Annotations:map[string]string{io.kubernetes.container.hash: 4e980fe0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316657090f5cffcd0503b91d427543c65807dac517373699e629c60a47444356,PodSandboxId:a22c8ef147418ae3b7b41984990582f394249928eb39a2cb517dbe663f43fbb2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722272208145849572,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6b3873e1eb7772d5b00a12b153cb28c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79298bbe1b2337e09867ac114b4875e36699d0eb1ba5b9725b468712ac570005,PodSandboxId:1a0140bf9d63825f6e047dd86d15a682e5e28597567507432eedafaa1e785527,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecified
Image:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722272208173742099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2280a372007a9e99150f8ed8e7385ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654027072dc843ca69f02c3c234a1eae4c56b9d9447ece343121a56ed3166d37,PodSandboxId:bf49bf99b816bf5d51496123adba38127de234b880313dc7cbb09e625c7b0906,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Im
ageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722272208088937677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b994742abe79d480c8f0ba290e51e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 8aba7843,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e54d28c2754d2aa11d61943c6e860b690ffc023075a34001580b78070aae803,PodSandboxId:c7bea62da2e92e649f0efd165cccb1c468a357414fbcbbb361e1db55f1f4bdcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a
964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722272208090472348,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58abe85a975931c71d2ced52e3a7744c,},Annotations:map[string]string{io.kubernetes.container.hash: 91dad007,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e5a7e9ab-063b-45ae-bfc0-302ec62e2401 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:02:08 addons-433102 crio[683]: time="2024-07-29 17:02:08.579698965Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=31c281fc-0cce-4f72-a45f-1dcda1c63b91 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:02:08 addons-433102 crio[683]: time="2024-07-29 17:02:08.579770758Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=31c281fc-0cce-4f72-a45f-1dcda1c63b91 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:02:08 addons-433102 crio[683]: time="2024-07-29 17:02:08.581054764Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d96295c6-e031-4291-ae0a-a8d0182092d5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:02:08 addons-433102 crio[683]: time="2024-07-29 17:02:08.582561421Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722272528582536147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d96295c6-e031-4291-ae0a-a8d0182092d5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:02:08 addons-433102 crio[683]: time="2024-07-29 17:02:08.583191682Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e59c254-5ff4-4f79-86cd-5f74e4eb5001 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:02:08 addons-433102 crio[683]: time="2024-07-29 17:02:08.583268221Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e59c254-5ff4-4f79-86cd-5f74e4eb5001 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:02:08 addons-433102 crio[683]: time="2024-07-29 17:02:08.583581378Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f40ae879569d775a79f117bd872c259ff93223ced84a0e688802eab6411d8c7,PodSandboxId:3570749d9d79d684051602d352dba7823388fd99b5f9878334cbc6d894862f43,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722272519487084714,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-cz2bv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 773dada2-8958-460d-b4f8-53d9981e74ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2be7a527,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552417b813e20a7ce35e6fcea96e06ad05be03de1a0bd835a9bf6528b1b97ed0,PodSandboxId:14569a4d3adaa424921ec1367e09008c4a61d82b4d4420f28aa901a13496fca1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722272380866463815,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bba16d61-afc5-4c02-85a7-8e1181099d91,},Annotations:map[string]string{io.kubernet
es.container.hash: da8d9711,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:168151c1371e9807e41e98e386b805a9c081e1884f375f772acd59461fd1e4e1,PodSandboxId:ad0779037f7870e83416e7b3d9c156a46b69c43805d64302b606f8dd75df6fa3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722272311011477451,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e3ae3c83-a5c9-4ac7-8
e5e-89b7df19295c,},Annotations:map[string]string{io.kubernetes.container.hash: 9aacfe94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f05ed99aac59f862e4fa68ec9a7cc203e689d314aef7acd1ce66000468e98f7,PodSandboxId:94aa128b95fdd22d539586659e5ac7a7ecb8e5c15f25c9cbdc8ee07a83c0048a,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722272279179726794,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hwdl9,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 6dd37089-7382-4c1f-976d-fd39b4d60eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 3b96f1fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e3723967067fd8c4aff73427ece3579d27b86c225b1a8485d140c46bce1f89a,PodSandboxId:56aabf01c114c64b76e1249bcaa2568327634a01e5757e8d80952c9bb31b476a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722272279057899741,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4cfgf,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3cafb5cf-a3ce-4182-bfcb-861cb20d8d31,},Annotations:map[string]string{io.kubernetes.container.hash: cdeb40e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba0974d77ed6cae9390e642d6962087e3c80b050cf6bfa114fa6593bde64aee7,PodSandboxId:788563bf38e22d42ff70445a3e7dc5ca86356221a9c50ccbe097c161a570db36,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722272261121646806,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-c59844bb4-fdwdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377d84f1-430a-423a-8e08-3ffc0e083b56,},Annotations:map[string]string{io.kubernetes.container.hash: b69a97df,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2599cbcd1abdc6363e42cd84b81942c2062fb81ff763fe53dd406df7addc2b42,PodSandboxId:3f828acf7d097497af16440cc6cd07ae40d2bc608a845a408412fdda28abb0c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722272233545936328,La
bels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb738aeb-40ec-47f1-9422-8c2a64cb1b38,},Annotations:map[string]string{io.kubernetes.container.hash: 5db4b699,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36bd6adcb73e3db92ab8716fd9db0d1c3a693fba74911af5ad6739dd21be75cb,PodSandboxId:a7d20bcb6427eaaa99e2a808005690bbce7f36eb60d7bb3b5b7689203d9e1cc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722272231701006614,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-chxlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13483151-7a93-4b7e-bc8a-a0df4c049a67,},Annotations:map[string]string{io.kubernetes.container.hash: 82a81137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a55b409fab4e27959260804b6052797f442f48d1411d6f5b444548fa1720f7d,PodSandboxId:8fbf004632d3aa84babf678ad0864bf270121c283fcfba5f4aa1a1c73f779ca9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722272229216967769,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6wcxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 508ba4dd-e6d5-438e-a66c-0188b555f367,},Annotations:map[string]string{io.kubernetes.container.hash: 4e980fe0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316657090f5cffcd0503b91d427543c65807dac517373699e629c60a47444356,PodSandboxId:a22c8ef147418ae3b7b41984990582f394249928eb39a2cb517dbe663f43fbb2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722272208145849572,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6b3873e1eb7772d5b00a12b153cb28c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79298bbe1b2337e09867ac114b4875e36699d0eb1ba5b9725b468712ac570005,PodSandboxId:1a0140bf9d63825f6e047dd86d15a682e5e28597567507432eedafaa1e785527,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecified
Image:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722272208173742099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2280a372007a9e99150f8ed8e7385ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654027072dc843ca69f02c3c234a1eae4c56b9d9447ece343121a56ed3166d37,PodSandboxId:bf49bf99b816bf5d51496123adba38127de234b880313dc7cbb09e625c7b0906,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Im
ageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722272208088937677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b994742abe79d480c8f0ba290e51e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 8aba7843,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e54d28c2754d2aa11d61943c6e860b690ffc023075a34001580b78070aae803,PodSandboxId:c7bea62da2e92e649f0efd165cccb1c468a357414fbcbbb361e1db55f1f4bdcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a
964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722272208090472348,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58abe85a975931c71d2ced52e3a7744c,},Annotations:map[string]string{io.kubernetes.container.hash: 91dad007,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e59c254-5ff4-4f79-86cd-5f74e4eb5001 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:02:08 addons-433102 crio[683]: time="2024-07-29 17:02:08.617934174Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a308ae5b-44c1-49bc-9334-255b0ec52b65 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:02:08 addons-433102 crio[683]: time="2024-07-29 17:02:08.618009386Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a308ae5b-44c1-49bc-9334-255b0ec52b65 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:02:08 addons-433102 crio[683]: time="2024-07-29 17:02:08.619186968Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e08a9774-cde4-4f72-b066-0160761e63f3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:02:08 addons-433102 crio[683]: time="2024-07-29 17:02:08.620675275Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722272528620647340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e08a9774-cde4-4f72-b066-0160761e63f3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:02:08 addons-433102 crio[683]: time="2024-07-29 17:02:08.621229327Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a229e0cc-e7f4-42b8-9bc5-6e679103d5ce name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:02:08 addons-433102 crio[683]: time="2024-07-29 17:02:08.621302916Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a229e0cc-e7f4-42b8-9bc5-6e679103d5ce name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:02:08 addons-433102 crio[683]: time="2024-07-29 17:02:08.621620031Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f40ae879569d775a79f117bd872c259ff93223ced84a0e688802eab6411d8c7,PodSandboxId:3570749d9d79d684051602d352dba7823388fd99b5f9878334cbc6d894862f43,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722272519487084714,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-cz2bv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 773dada2-8958-460d-b4f8-53d9981e74ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2be7a527,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552417b813e20a7ce35e6fcea96e06ad05be03de1a0bd835a9bf6528b1b97ed0,PodSandboxId:14569a4d3adaa424921ec1367e09008c4a61d82b4d4420f28aa901a13496fca1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722272380866463815,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bba16d61-afc5-4c02-85a7-8e1181099d91,},Annotations:map[string]string{io.kubernet
es.container.hash: da8d9711,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:168151c1371e9807e41e98e386b805a9c081e1884f375f772acd59461fd1e4e1,PodSandboxId:ad0779037f7870e83416e7b3d9c156a46b69c43805d64302b606f8dd75df6fa3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722272311011477451,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e3ae3c83-a5c9-4ac7-8
e5e-89b7df19295c,},Annotations:map[string]string{io.kubernetes.container.hash: 9aacfe94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f05ed99aac59f862e4fa68ec9a7cc203e689d314aef7acd1ce66000468e98f7,PodSandboxId:94aa128b95fdd22d539586659e5ac7a7ecb8e5c15f25c9cbdc8ee07a83c0048a,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722272279179726794,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hwdl9,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 6dd37089-7382-4c1f-976d-fd39b4d60eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 3b96f1fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e3723967067fd8c4aff73427ece3579d27b86c225b1a8485d140c46bce1f89a,PodSandboxId:56aabf01c114c64b76e1249bcaa2568327634a01e5757e8d80952c9bb31b476a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722272279057899741,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4cfgf,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3cafb5cf-a3ce-4182-bfcb-861cb20d8d31,},Annotations:map[string]string{io.kubernetes.container.hash: cdeb40e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba0974d77ed6cae9390e642d6962087e3c80b050cf6bfa114fa6593bde64aee7,PodSandboxId:788563bf38e22d42ff70445a3e7dc5ca86356221a9c50ccbe097c161a570db36,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722272261121646806,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-c59844bb4-fdwdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377d84f1-430a-423a-8e08-3ffc0e083b56,},Annotations:map[string]string{io.kubernetes.container.hash: b69a97df,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2599cbcd1abdc6363e42cd84b81942c2062fb81ff763fe53dd406df7addc2b42,PodSandboxId:3f828acf7d097497af16440cc6cd07ae40d2bc608a845a408412fdda28abb0c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722272233545936328,La
bels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb738aeb-40ec-47f1-9422-8c2a64cb1b38,},Annotations:map[string]string{io.kubernetes.container.hash: 5db4b699,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36bd6adcb73e3db92ab8716fd9db0d1c3a693fba74911af5ad6739dd21be75cb,PodSandboxId:a7d20bcb6427eaaa99e2a808005690bbce7f36eb60d7bb3b5b7689203d9e1cc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722272231701006614,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-chxlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13483151-7a93-4b7e-bc8a-a0df4c049a67,},Annotations:map[string]string{io.kubernetes.container.hash: 82a81137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a55b409fab4e27959260804b6052797f442f48d1411d6f5b444548fa1720f7d,PodSandboxId:8fbf004632d3aa84babf678ad0864bf270121c283fcfba5f4aa1a1c73f779ca9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722272229216967769,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6wcxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 508ba4dd-e6d5-438e-a66c-0188b555f367,},Annotations:map[string]string{io.kubernetes.container.hash: 4e980fe0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316657090f5cffcd0503b91d427543c65807dac517373699e629c60a47444356,PodSandboxId:a22c8ef147418ae3b7b41984990582f394249928eb39a2cb517dbe663f43fbb2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722272208145849572,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6b3873e1eb7772d5b00a12b153cb28c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79298bbe1b2337e09867ac114b4875e36699d0eb1ba5b9725b468712ac570005,PodSandboxId:1a0140bf9d63825f6e047dd86d15a682e5e28597567507432eedafaa1e785527,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecified
Image:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722272208173742099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2280a372007a9e99150f8ed8e7385ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654027072dc843ca69f02c3c234a1eae4c56b9d9447ece343121a56ed3166d37,PodSandboxId:bf49bf99b816bf5d51496123adba38127de234b880313dc7cbb09e625c7b0906,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Im
ageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722272208088937677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b994742abe79d480c8f0ba290e51e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 8aba7843,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e54d28c2754d2aa11d61943c6e860b690ffc023075a34001580b78070aae803,PodSandboxId:c7bea62da2e92e649f0efd165cccb1c468a357414fbcbbb361e1db55f1f4bdcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a
964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722272208090472348,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58abe85a975931c71d2ced52e3a7744c,},Annotations:map[string]string{io.kubernetes.container.hash: 91dad007,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a229e0cc-e7f4-42b8-9bc5-6e679103d5ce name=/runtime.v1.RuntimeService/ListContainers
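	A note on the entries above: the back-to-back ListContainers, Version, and ImageFsInfo requests are the kubelet's routine CRI polling, which CRI-O records at debug level; the repetition within the same second is expected and is not itself an error. To follow this stream live on the node, the runtime journal can usually be tailed like this (a sketch, assuming the addons-433102 profile is still running and CRI-O is managed as the crio systemd unit):

	  minikube ssh -p addons-433102 -- sudo journalctl -u crio -n 200 --no-pager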
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1f40ae879569d       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        9 seconds ago       Running             hello-world-app           0                   3570749d9d79d       hello-world-app-6778b5fc9f-cz2bv
	552417b813e20       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                              2 minutes ago       Running             nginx                     0                   14569a4d3adaa       nginx
	168151c1371e9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   ad0779037f787       busybox
	6f05ed99aac59       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago       Exited              patch                     0                   94aa128b95fdd       ingress-nginx-admission-patch-hwdl9
	0e3723967067f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago       Exited              create                    0                   56aabf01c114c       ingress-nginx-admission-create-4cfgf
	ba0974d77ed6c       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   788563bf38e22       metrics-server-c59844bb4-fdwdm
	2599cbcd1abdc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   3f828acf7d097       storage-provisioner
	36bd6adcb73e3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             4 minutes ago       Running             coredns                   0                   a7d20bcb6427e       coredns-7db6d8ff4d-chxlc
	2a55b409fab4e       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                             4 minutes ago       Running             kube-proxy                0                   8fbf004632d3a       kube-proxy-6wcxr
	79298bbe1b233       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                             5 minutes ago       Running             kube-scheduler            0                   1a0140bf9d638       kube-scheduler-addons-433102
	316657090f5cf       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                             5 minutes ago       Running             kube-controller-manager   0                   a22c8ef147418       kube-controller-manager-addons-433102
	4e54d28c2754d       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                             5 minutes ago       Running             kube-apiserver            0                   c7bea62da2e92       kube-apiserver-addons-433102
	654027072dc84       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             5 minutes ago       Running             etcd                      0                   bf49bf99b816b       etcd-addons-433102
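	The same container inventory can normally be reproduced on the node itself with crictl, which queries the CRI-O socket directly (a sketch; only the profile name addons-433102 is taken from this run):

	  minikube ssh -p addons-433102 -- sudo crictl ps -a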
	
	
	==> coredns [36bd6adcb73e3db92ab8716fd9db0d1c3a693fba74911af5ad6739dd21be75cb] <==
	Trace[2142447518]: [30.000648548s] [30.000648548s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[577238642]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 16:57:12.762) (total time: 30012ms):
	Trace[577238642]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30012ms (16:57:42.774)
	Trace[577238642]: [30.012315349s] [30.012315349s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[155335818]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 16:57:12.773) (total time: 30001ms):
	Trace[155335818]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (16:57:42.774)
	Trace[155335818]: [30.00149739s] [30.00149739s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 10.244.0.22:37638 - 1625 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000171656s
	[INFO] 10.244.0.22:43559 - 62030 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000370461s
	[INFO] 10.244.0.22:49818 - 43448 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000092003s
	[INFO] 10.244.0.22:59059 - 54944 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000383853s
	[INFO] 10.244.0.22:48824 - 4755 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000093734s
	[INFO] 10.244.0.22:45350 - 10134 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000056265s
	[INFO] 10.244.0.22:35202 - 347 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001346528s
	[INFO] 10.244.0.22:60289 - 25412 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001438966s
	[INFO] 10.244.0.24:58940 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000586437s
	[INFO] 10.244.0.24:42852 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000155432s
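	The i/o timeouts above are coredns failing to list and watch API objects via the in-cluster service address 10.96.0.1:443 for roughly the first 30 seconds after startup; the later NXDOMAIN/NOERROR lines show normal query handling once connectivity is established. On a live cluster, the service, its backing endpoint, and the current coredns logs can be checked with kubectl (a sketch, assuming the standard k8s-app=kube-dns label):

	  kubectl --context addons-433102 get svc kubernetes -o wide
	  kubectl --context addons-433102 get endpoints kubernetes
	  kubectl --context addons-433102 -n kube-system logs -l k8s-app=kube-dns --tail=50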
	
	
	==> describe nodes <==
	Name:               addons-433102
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-433102
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=addons-433102
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T16_56_54_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-433102
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 16:56:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-433102
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:02:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:00:27 +0000   Mon, 29 Jul 2024 16:56:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:00:27 +0000   Mon, 29 Jul 2024 16:56:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:00:27 +0000   Mon, 29 Jul 2024 16:56:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:00:27 +0000   Mon, 29 Jul 2024 16:56:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.73
	  Hostname:    addons-433102
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac35226c0ae2487b829b216aeb471bfb
	  System UUID:                ac35226c-0ae2-487b-829b-216aeb471bfb
	  Boot ID:                    2cf79d73-3d23-4b77-9315-61b82db51e3e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  default                     hello-world-app-6778b5fc9f-cz2bv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 coredns-7db6d8ff4d-chxlc                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m1s
	  kube-system                 etcd-addons-433102                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m15s
	  kube-system                 kube-apiserver-addons-433102             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-controller-manager-addons-433102    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-proxy-6wcxr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-scheduler-addons-433102             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 metrics-server-c59844bb4-fdwdm           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m56s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m58s  kube-proxy       
	  Normal  Starting                 5m15s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m15s  kubelet          Node addons-433102 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m15s  kubelet          Node addons-433102 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m15s  kubelet          Node addons-433102 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m14s  kubelet          Node addons-433102 status is now: NodeReady
	  Normal  RegisteredNode           5m2s   node-controller  Node addons-433102 event: Registered Node addons-433102 in Controller
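	This node view (labels, conditions, capacity, pod allocations, and events) can be regenerated at any point while the profile is up (a sketch):

	  kubectl --context addons-433102 describe node addons-433102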
	
	
	==> dmesg <==
	[  +0.142258] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.098887] kauditd_printk_skb: 85 callbacks suppressed
	[  +5.120680] kauditd_printk_skb: 161 callbacks suppressed
	[  +5.853141] kauditd_printk_skb: 64 callbacks suppressed
	[  +7.822274] kauditd_printk_skb: 5 callbacks suppressed
	[ +10.788167] kauditd_printk_skb: 13 callbacks suppressed
	[ +12.129345] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.214772] kauditd_printk_skb: 12 callbacks suppressed
	[Jul29 16:58] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.542734] kauditd_printk_skb: 78 callbacks suppressed
	[ +14.450806] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.667528] kauditd_printk_skb: 57 callbacks suppressed
	[ +21.900388] kauditd_printk_skb: 6 callbacks suppressed
	[Jul29 16:59] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.078951] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.065817] kauditd_printk_skb: 60 callbacks suppressed
	[  +5.508316] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.060702] kauditd_printk_skb: 21 callbacks suppressed
	[  +7.985343] kauditd_printk_skb: 35 callbacks suppressed
	[  +6.875915] kauditd_printk_skb: 31 callbacks suppressed
	[  +9.159485] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.401675] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.243275] kauditd_printk_skb: 24 callbacks suppressed
	[Jul29 17:01] kauditd_printk_skb: 2 callbacks suppressed
	[Jul29 17:02] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [654027072dc843ca69f02c3c234a1eae4c56b9d9447ece343121a56ed3166d37] <==
	{"level":"info","ts":"2024-07-29T16:58:02.706714Z","caller":"traceutil/trace.go:171","msg":"trace[1916380899] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1022; }","duration":"333.674569ms","start":"2024-07-29T16:58:02.373029Z","end":"2024-07-29T16:58:02.706703Z","steps":["trace[1916380899] 'agreement among raft nodes before linearized reading'  (duration: 332.02728ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T16:58:02.70687Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T16:58:02.373016Z","time spent":"333.8383ms","remote":"127.0.0.1:39452","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14375,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"info","ts":"2024-07-29T16:58:22.794159Z","caller":"traceutil/trace.go:171","msg":"trace[1000447906] transaction","detail":"{read_only:false; response_revision:1133; number_of_response:1; }","duration":"225.707144ms","start":"2024-07-29T16:58:22.568422Z","end":"2024-07-29T16:58:22.794129Z","steps":["trace[1000447906] 'process raft request'  (duration: 225.281432ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T16:58:23.039622Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.803419ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-c59844bb4-fdwdm.17e6bd7f99ef4ba8\" ","response":"range_response_count:1 size:813"}
	{"level":"warn","ts":"2024-07-29T16:58:23.039662Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.651373ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T16:58:23.039671Z","caller":"traceutil/trace.go:171","msg":"trace[1839948077] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-c59844bb4-fdwdm.17e6bd7f99ef4ba8; range_end:; response_count:1; response_revision:1133; }","duration":"108.890307ms","start":"2024-07-29T16:58:22.930768Z","end":"2024-07-29T16:58:23.039658Z","steps":["trace[1839948077] 'range keys from in-memory index tree'  (duration: 108.658222ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T16:58:23.039698Z","caller":"traceutil/trace.go:171","msg":"trace[2012143020] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1133; }","duration":"103.751544ms","start":"2024-07-29T16:58:22.935938Z","end":"2024-07-29T16:58:23.039689Z","steps":["trace[2012143020] 'range keys from in-memory index tree'  (duration: 103.612834ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T16:58:59.750646Z","caller":"traceutil/trace.go:171","msg":"trace[328429338] transaction","detail":"{read_only:false; response_revision:1335; number_of_response:1; }","duration":"141.153601ms","start":"2024-07-29T16:58:59.609468Z","end":"2024-07-29T16:58:59.750622Z","steps":["trace[328429338] 'process raft request'  (duration: 140.904059ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T16:59:22.179164Z","caller":"traceutil/trace.go:171","msg":"trace[1576085642] transaction","detail":"{read_only:false; response_revision:1495; number_of_response:1; }","duration":"316.163063ms","start":"2024-07-29T16:59:21.862945Z","end":"2024-07-29T16:59:22.179108Z","steps":["trace[1576085642] 'process raft request'  (duration: 316.053636ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T16:59:22.179259Z","caller":"traceutil/trace.go:171","msg":"trace[1362362908] linearizableReadLoop","detail":"{readStateIndex:1551; appliedIndex:1551; }","duration":"287.857329ms","start":"2024-07-29T16:59:21.891386Z","end":"2024-07-29T16:59:22.179244Z","steps":["trace[1362362908] 'read index received'  (duration: 287.851825ms)","trace[1362362908] 'applied index is now lower than readState.Index'  (duration: 4.601µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T16:59:22.179389Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"287.978344ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-29T16:59:22.179415Z","caller":"traceutil/trace.go:171","msg":"trace[1724860483] range","detail":"{range_begin:/registry/jobs/; range_end:/registry/jobs0; response_count:0; response_revision:1495; }","duration":"288.050863ms","start":"2024-07-29T16:59:21.891357Z","end":"2024-07-29T16:59:22.179408Z","steps":["trace[1724860483] 'agreement among raft nodes before linearized reading'  (duration: 287.957902ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T16:59:22.179407Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T16:59:21.86293Z","time spent":"316.344989ms","remote":"127.0.0.1:39440","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1480 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-07-29T16:59:22.1853Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"273.708537ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T16:59:22.185561Z","caller":"traceutil/trace.go:171","msg":"trace[49773128] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1496; }","duration":"275.698143ms","start":"2024-07-29T16:59:21.909766Z","end":"2024-07-29T16:59:22.185464Z","steps":["trace[49773128] 'agreement among raft nodes before linearized reading'  (duration: 273.698836ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T16:59:22.186751Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.584633ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:12134"}
	{"level":"info","ts":"2024-07-29T16:59:22.186884Z","caller":"traceutil/trace.go:171","msg":"trace[166255791] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:1496; }","duration":"168.846033ms","start":"2024-07-29T16:59:22.018031Z","end":"2024-07-29T16:59:22.186877Z","steps":["trace[166255791] 'agreement among raft nodes before linearized reading'  (duration: 168.373744ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T16:59:22.187369Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"251.845992ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T16:59:22.18748Z","caller":"traceutil/trace.go:171","msg":"trace[2086234027] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1496; }","duration":"251.979298ms","start":"2024-07-29T16:59:21.935492Z","end":"2024-07-29T16:59:22.187471Z","steps":["trace[2086234027] 'agreement among raft nodes before linearized reading'  (duration: 251.851669ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T16:59:40.770712Z","caller":"traceutil/trace.go:171","msg":"trace[2075011209] transaction","detail":"{read_only:false; response_revision:1670; number_of_response:1; }","duration":"290.688029ms","start":"2024-07-29T16:59:40.479988Z","end":"2024-07-29T16:59:40.770676Z","steps":["trace[2075011209] 'process raft request'  (duration: 290.284883ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T17:00:00.592864Z","caller":"traceutil/trace.go:171","msg":"trace[1762399] linearizableReadLoop","detail":"{readStateIndex:1933; appliedIndex:1932; }","duration":"138.127973ms","start":"2024-07-29T17:00:00.45466Z","end":"2024-07-29T17:00:00.592788Z","steps":["trace[1762399] 'read index received'  (duration: 137.993767ms)","trace[1762399] 'applied index is now lower than readState.Index'  (duration: 133.867µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T17:00:00.593032Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.34442ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3607"}
	{"level":"info","ts":"2024-07-29T17:00:00.593085Z","caller":"traceutil/trace.go:171","msg":"trace[201807023] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1862; }","duration":"138.437973ms","start":"2024-07-29T17:00:00.454633Z","end":"2024-07-29T17:00:00.593071Z","steps":["trace[201807023] 'agreement among raft nodes before linearized reading'  (duration: 138.304913ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T17:00:00.593254Z","caller":"traceutil/trace.go:171","msg":"trace[1438204927] transaction","detail":"{read_only:false; response_revision:1862; number_of_response:1; }","duration":"206.312001ms","start":"2024-07-29T17:00:00.386927Z","end":"2024-07-29T17:00:00.593239Z","steps":["trace[1438204927] 'process raft request'  (duration: 205.771914ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T17:00:32.807108Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T17:00:32.460163Z","time spent":"346.935326ms","remote":"127.0.0.1:39264","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	
	
	==> kernel <==
	 17:02:08 up 5 min,  0 users,  load average: 0.30, 1.25, 0.72
	Linux addons-433102 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4e54d28c2754d2aa11d61943c6e860b690ffc023075a34001580b78070aae803] <==
	I0729 16:58:52.990163       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0729 16:58:52.998621       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0729 16:59:04.282262       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0729 16:59:05.356413       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0729 16:59:29.080019       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0729 16:59:38.136017       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0729 16:59:38.331400       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.113.173"}
	E0729 16:59:42.090755       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0729 16:59:49.706608       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 16:59:49.706895       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 16:59:49.743087       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 16:59:49.743164       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 16:59:49.768500       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 16:59:49.768627       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 16:59:49.797454       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 16:59:49.797528       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 16:59:49.862437       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 16:59:49.862497       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0729 16:59:50.769093       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0729 16:59:50.863740       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0729 16:59:50.879028       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0729 16:59:56.368463       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.115.170"}
	I0729 17:01:58.324089       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.139.192"}
	E0729 17:02:00.042077       1 watch.go:250] http2: stream closed
	E0729 17:02:00.746585       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [316657090f5cffcd0503b91d427543c65807dac517373699e629c60a47444356] <==
	W0729 17:00:43.936520       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:00:43.936699       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:00:54.741473       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:00:54.741598       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:01:05.492979       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:01:05.493042       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:01:09.912625       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:01:09.912739       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:01:27.370241       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:01:27.370340       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:01:40.133065       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:01:40.133299       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:01:46.193192       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:01:46.193364       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0729 17:01:58.162203       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="43.059549ms"
	I0729 17:01:58.180487       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="17.710437ms"
	I0729 17:01:58.192983       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="12.411932ms"
	I0729 17:01:58.193265       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="52.516µs"
	I0729 17:02:00.024440       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="21.732992ms"
	I0729 17:02:00.025017       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="77.456µs"
	I0729 17:02:00.659799       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0729 17:02:00.667013       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0729 17:02:00.670616       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="5.134µs"
	W0729 17:02:04.218700       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:02:04.218760       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [2a55b409fab4e27959260804b6052797f442f48d1411d6f5b444548fa1720f7d] <==
	I0729 16:57:09.801024       1 server_linux.go:69] "Using iptables proxy"
	I0729 16:57:09.839484       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.73"]
	I0729 16:57:10.422467       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 16:57:10.422528       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 16:57:10.422546       1 server_linux.go:165] "Using iptables Proxier"
	I0729 16:57:10.622162       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 16:57:10.622360       1 server.go:872] "Version info" version="v1.30.3"
	I0729 16:57:10.622390       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 16:57:10.683896       1 config.go:192] "Starting service config controller"
	I0729 16:57:10.683944       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 16:57:10.683986       1 config.go:101] "Starting endpoint slice config controller"
	I0729 16:57:10.683990       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 16:57:10.684414       1 config.go:319] "Starting node config controller"
	I0729 16:57:10.684443       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 16:57:10.822005       1 shared_informer.go:320] Caches are synced for node config
	I0729 16:57:10.822295       1 shared_informer.go:320] Caches are synced for service config
	I0729 16:57:10.822317       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [79298bbe1b2337e09867ac114b4875e36699d0eb1ba5b9725b468712ac570005] <==
	W0729 16:56:50.943176       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 16:56:50.945597       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 16:56:50.943858       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 16:56:50.943979       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 16:56:50.943995       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 16:56:50.944160       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 16:56:51.789063       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 16:56:51.789188       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 16:56:51.794605       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 16:56:51.794730       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 16:56:51.817477       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 16:56:51.817755       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 16:56:51.845352       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 16:56:51.845439       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 16:56:51.949743       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 16:56:51.949932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 16:56:51.961875       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 16:56:51.961992       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 16:56:51.966713       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 16:56:51.966876       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 16:56:51.976744       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 16:56:51.977282       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 16:56:52.066562       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 16:56:52.067652       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0729 16:56:53.937937       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 17:01:58 addons-433102 kubelet[1272]: I0729 17:01:58.153246    1272 memory_manager.go:354] "RemoveStaleState removing state" podUID="74758328-6b55-4582-879f-f946ae6e0195" containerName="local-path-provisioner"
	Jul 29 17:01:58 addons-433102 kubelet[1272]: I0729 17:01:58.153257    1272 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb809283-0294-420c-8d5a-7cd999443c2b" containerName="headlamp"
	Jul 29 17:01:58 addons-433102 kubelet[1272]: I0729 17:01:58.299999    1272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b82v5\" (UniqueName: \"kubernetes.io/projected/773dada2-8958-460d-b4f8-53d9981e74ab-kube-api-access-b82v5\") pod \"hello-world-app-6778b5fc9f-cz2bv\" (UID: \"773dada2-8958-460d-b4f8-53d9981e74ab\") " pod="default/hello-world-app-6778b5fc9f-cz2bv"
	Jul 29 17:01:59 addons-433102 kubelet[1272]: I0729 17:01:59.408302    1272 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzfml\" (UniqueName: \"kubernetes.io/projected/e7277800-f99a-44f9-8098-4c1bf978bf95-kube-api-access-rzfml\") pod \"e7277800-f99a-44f9-8098-4c1bf978bf95\" (UID: \"e7277800-f99a-44f9-8098-4c1bf978bf95\") "
	Jul 29 17:01:59 addons-433102 kubelet[1272]: I0729 17:01:59.411326    1272 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7277800-f99a-44f9-8098-4c1bf978bf95-kube-api-access-rzfml" (OuterVolumeSpecName: "kube-api-access-rzfml") pod "e7277800-f99a-44f9-8098-4c1bf978bf95" (UID: "e7277800-f99a-44f9-8098-4c1bf978bf95"). InnerVolumeSpecName "kube-api-access-rzfml". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 29 17:01:59 addons-433102 kubelet[1272]: I0729 17:01:59.510115    1272 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rzfml\" (UniqueName: \"kubernetes.io/projected/e7277800-f99a-44f9-8098-4c1bf978bf95-kube-api-access-rzfml\") on node \"addons-433102\" DevicePath \"\""
	Jul 29 17:01:59 addons-433102 kubelet[1272]: I0729 17:01:59.984956    1272 scope.go:117] "RemoveContainer" containerID="f35d6ba1e6c81601b26aa45042bf7c3cc4144bca23d30d1312b6512931a4b1b5"
	Jul 29 17:02:00 addons-433102 kubelet[1272]: I0729 17:02:00.023262    1272 scope.go:117] "RemoveContainer" containerID="f35d6ba1e6c81601b26aa45042bf7c3cc4144bca23d30d1312b6512931a4b1b5"
	Jul 29 17:02:00 addons-433102 kubelet[1272]: E0729 17:02:00.024623    1272 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f35d6ba1e6c81601b26aa45042bf7c3cc4144bca23d30d1312b6512931a4b1b5\": container with ID starting with f35d6ba1e6c81601b26aa45042bf7c3cc4144bca23d30d1312b6512931a4b1b5 not found: ID does not exist" containerID="f35d6ba1e6c81601b26aa45042bf7c3cc4144bca23d30d1312b6512931a4b1b5"
	Jul 29 17:02:00 addons-433102 kubelet[1272]: I0729 17:02:00.024662    1272 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f35d6ba1e6c81601b26aa45042bf7c3cc4144bca23d30d1312b6512931a4b1b5"} err="failed to get container status \"f35d6ba1e6c81601b26aa45042bf7c3cc4144bca23d30d1312b6512931a4b1b5\": rpc error: code = NotFound desc = could not find container \"f35d6ba1e6c81601b26aa45042bf7c3cc4144bca23d30d1312b6512931a4b1b5\": container with ID starting with f35d6ba1e6c81601b26aa45042bf7c3cc4144bca23d30d1312b6512931a4b1b5 not found: ID does not exist"
	Jul 29 17:02:00 addons-433102 kubelet[1272]: I0729 17:02:00.035653    1272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-cz2bv" podStartSLOduration=1.305370518 podStartE2EDuration="2.035632833s" podCreationTimestamp="2024-07-29 17:01:58 +0000 UTC" firstStartedPulling="2024-07-29 17:01:58.737031373 +0000 UTC m=+305.551933390" lastFinishedPulling="2024-07-29 17:01:59.467293685 +0000 UTC m=+306.282195705" observedRunningTime="2024-07-29 17:02:00.007052258 +0000 UTC m=+306.821954294" watchObservedRunningTime="2024-07-29 17:02:00.035632833 +0000 UTC m=+306.850534869"
	Jul 29 17:02:01 addons-433102 kubelet[1272]: I0729 17:02:01.327781    1272 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cafb5cf-a3ce-4182-bfcb-861cb20d8d31" path="/var/lib/kubelet/pods/3cafb5cf-a3ce-4182-bfcb-861cb20d8d31/volumes"
	Jul 29 17:02:01 addons-433102 kubelet[1272]: I0729 17:02:01.328328    1272 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dd37089-7382-4c1f-976d-fd39b4d60eeb" path="/var/lib/kubelet/pods/6dd37089-7382-4c1f-976d-fd39b4d60eeb/volumes"
	Jul 29 17:02:01 addons-433102 kubelet[1272]: I0729 17:02:01.328673    1272 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7277800-f99a-44f9-8098-4c1bf978bf95" path="/var/lib/kubelet/pods/e7277800-f99a-44f9-8098-4c1bf978bf95/volumes"
	Jul 29 17:02:03 addons-433102 kubelet[1272]: I0729 17:02:03.946178    1272 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f6e329b-89b7-469e-9fa5-764c8a67b3c1-webhook-cert\") pod \"0f6e329b-89b7-469e-9fa5-764c8a67b3c1\" (UID: \"0f6e329b-89b7-469e-9fa5-764c8a67b3c1\") "
	Jul 29 17:02:03 addons-433102 kubelet[1272]: I0729 17:02:03.946229    1272 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv7zb\" (UniqueName: \"kubernetes.io/projected/0f6e329b-89b7-469e-9fa5-764c8a67b3c1-kube-api-access-qv7zb\") pod \"0f6e329b-89b7-469e-9fa5-764c8a67b3c1\" (UID: \"0f6e329b-89b7-469e-9fa5-764c8a67b3c1\") "
	Jul 29 17:02:03 addons-433102 kubelet[1272]: I0729 17:02:03.953996    1272 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f6e329b-89b7-469e-9fa5-764c8a67b3c1-kube-api-access-qv7zb" (OuterVolumeSpecName: "kube-api-access-qv7zb") pod "0f6e329b-89b7-469e-9fa5-764c8a67b3c1" (UID: "0f6e329b-89b7-469e-9fa5-764c8a67b3c1"). InnerVolumeSpecName "kube-api-access-qv7zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 29 17:02:03 addons-433102 kubelet[1272]: I0729 17:02:03.954090    1272 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f6e329b-89b7-469e-9fa5-764c8a67b3c1-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "0f6e329b-89b7-469e-9fa5-764c8a67b3c1" (UID: "0f6e329b-89b7-469e-9fa5-764c8a67b3c1"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 29 17:02:04 addons-433102 kubelet[1272]: I0729 17:02:04.015532    1272 scope.go:117] "RemoveContainer" containerID="7be8ce37433cfd267f782b669559c0cefdba045cc225e2090600ed0a2061e472"
	Jul 29 17:02:04 addons-433102 kubelet[1272]: I0729 17:02:04.046664    1272 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f6e329b-89b7-469e-9fa5-764c8a67b3c1-webhook-cert\") on node \"addons-433102\" DevicePath \"\""
	Jul 29 17:02:04 addons-433102 kubelet[1272]: I0729 17:02:04.046688    1272 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qv7zb\" (UniqueName: \"kubernetes.io/projected/0f6e329b-89b7-469e-9fa5-764c8a67b3c1-kube-api-access-qv7zb\") on node \"addons-433102\" DevicePath \"\""
	Jul 29 17:02:04 addons-433102 kubelet[1272]: I0729 17:02:04.049899    1272 scope.go:117] "RemoveContainer" containerID="7be8ce37433cfd267f782b669559c0cefdba045cc225e2090600ed0a2061e472"
	Jul 29 17:02:04 addons-433102 kubelet[1272]: E0729 17:02:04.050357    1272 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7be8ce37433cfd267f782b669559c0cefdba045cc225e2090600ed0a2061e472\": container with ID starting with 7be8ce37433cfd267f782b669559c0cefdba045cc225e2090600ed0a2061e472 not found: ID does not exist" containerID="7be8ce37433cfd267f782b669559c0cefdba045cc225e2090600ed0a2061e472"
	Jul 29 17:02:04 addons-433102 kubelet[1272]: I0729 17:02:04.050383    1272 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7be8ce37433cfd267f782b669559c0cefdba045cc225e2090600ed0a2061e472"} err="failed to get container status \"7be8ce37433cfd267f782b669559c0cefdba045cc225e2090600ed0a2061e472\": rpc error: code = NotFound desc = could not find container \"7be8ce37433cfd267f782b669559c0cefdba045cc225e2090600ed0a2061e472\": container with ID starting with 7be8ce37433cfd267f782b669559c0cefdba045cc225e2090600ed0a2061e472 not found: ID does not exist"
	Jul 29 17:02:05 addons-433102 kubelet[1272]: I0729 17:02:05.326872    1272 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f6e329b-89b7-469e-9fa5-764c8a67b3c1" path="/var/lib/kubelet/pods/0f6e329b-89b7-469e-9fa5-764c8a67b3c1/volumes"
	
	
	==> storage-provisioner [2599cbcd1abdc6363e42cd84b81942c2062fb81ff763fe53dd406df7addc2b42] <==
	I0729 16:57:14.026382       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 16:57:14.055161       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 16:57:14.055234       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 16:57:14.083407       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 16:57:14.083582       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-433102_5781e976-6e08-4c8c-9c60-c03e601c784d!
	I0729 16:57:14.091406       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"14133a61-5614-4788-b090-089c59317928", APIVersion:"v1", ResourceVersion:"621", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-433102_5781e976-6e08-4c8c-9c60-c03e601c784d became leader
	I0729 16:57:14.184561       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-433102_5781e976-6e08-4c8c-9c60-c03e601c784d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-433102 -n addons-433102
helpers_test.go:261: (dbg) Run:  kubectl --context addons-433102 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (151.81s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (362.22s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.670813ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-fdwdm" [377d84f1-430a-423a-8e08-3ffc0e083b56] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005508737s
addons_test.go:417: (dbg) Run:  kubectl --context addons-433102 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-433102 top pods -n kube-system: exit status 1 (82.020202ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-433102, age: 2m10.849498006s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-433102 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-433102 top pods -n kube-system: exit status 1 (66.192386ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-chxlc, age: 2m1.362267101s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-433102 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-433102 top pods -n kube-system: exit status 1 (70.964691ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-chxlc, age: 2m6.997300184s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-433102 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-433102 top pods -n kube-system: exit status 1 (66.984686ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-chxlc, age: 2m10.498918112s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-433102 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-433102 top pods -n kube-system: exit status 1 (75.886612ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-chxlc, age: 2m23.515067203s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-433102 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-433102 top pods -n kube-system: exit status 1 (65.507074ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-chxlc, age: 2m39.365249321s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-433102 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-433102 top pods -n kube-system: exit status 1 (62.864551ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-chxlc, age: 2m56.174573103s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-433102 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-433102 top pods -n kube-system: exit status 1 (60.374702ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-chxlc, age: 3m37.994189408s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-433102 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-433102 top pods -n kube-system: exit status 1 (61.043183ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-chxlc, age: 4m20.003202935s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-433102 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-433102 top pods -n kube-system: exit status 1 (64.0185ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-chxlc, age: 4m56.482337416s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-433102 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-433102 top pods -n kube-system: exit status 1 (59.647117ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-chxlc, age: 5m34.542715978s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-433102 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-433102 top pods -n kube-system: exit status 1 (60.792385ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-chxlc, age: 6m28.585905436s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-433102 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-433102 top pods -n kube-system: exit status 1 (64.887493ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-chxlc, age: 7m50.373368371s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-433102 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-433102 -n addons-433102
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-433102 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-433102 logs -n 25: (1.240175078s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-254884                                                                     | download-only-254884 | jenkins | v1.33.1 | 29 Jul 24 16:56 UTC | 29 Jul 24 16:56 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-601375 | jenkins | v1.33.1 | 29 Jul 24 16:56 UTC |                     |
	|         | binary-mirror-601375                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:35651                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-601375                                                                     | binary-mirror-601375 | jenkins | v1.33.1 | 29 Jul 24 16:56 UTC | 29 Jul 24 16:56 UTC |
	| addons  | disable dashboard -p                                                                        | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:56 UTC |                     |
	|         | addons-433102                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:56 UTC |                     |
	|         | addons-433102                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-433102 --wait=true                                                                | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:56 UTC | 29 Jul 24 16:58 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-433102 addons disable                                                                | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:58 UTC | 29 Jul 24 16:58 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:59 UTC | 29 Jul 24 16:59 UTC |
	|         | addons-433102                                                                               |                      |         |         |                     |                     |
	| ip      | addons-433102 ip                                                                            | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:59 UTC | 29 Jul 24 16:59 UTC |
	| addons  | addons-433102 addons disable                                                                | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:59 UTC | 29 Jul 24 16:59 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-433102 addons disable                                                                | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:59 UTC | 29 Jul 24 16:59 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-433102 addons disable                                                                | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:59 UTC | 29 Jul 24 16:59 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-433102 ssh cat                                                                       | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:59 UTC | 29 Jul 24 16:59 UTC |
	|         | /opt/local-path-provisioner/pvc-b5b14fe5-d708-427a-a913-c11d781bebaf_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-433102 addons disable                                                                | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:59 UTC | 29 Jul 24 17:00 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:59 UTC | 29 Jul 24 16:59 UTC |
	|         | -p addons-433102                                                                            |                      |         |         |                     |                     |
	| addons  | addons-433102 addons                                                                        | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:59 UTC | 29 Jul 24 16:59 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-433102 ssh curl -s                                                                   | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:59 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-433102 addons                                                                        | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:59 UTC | 29 Jul 24 16:59 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:59 UTC | 29 Jul 24 16:59 UTC |
	|         | addons-433102                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 16:59 UTC | 29 Jul 24 16:59 UTC |
	|         | -p addons-433102                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-433102 addons disable                                                                | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 17:00 UTC | 29 Jul 24 17:00 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-433102 ip                                                                            | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 17:01 UTC | 29 Jul 24 17:01 UTC |
	| addons  | addons-433102 addons disable                                                                | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 17:01 UTC | 29 Jul 24 17:02 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-433102 addons disable                                                                | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 17:02 UTC | 29 Jul 24 17:02 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-433102 addons                                                                        | addons-433102        | jenkins | v1.33.1 | 29 Jul 24 17:04 UTC | 29 Jul 24 17:04 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 16:56:12
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 16:56:12.537777   19312 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:56:12.538015   19312 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:56:12.538024   19312 out.go:304] Setting ErrFile to fd 2...
	I0729 16:56:12.538028   19312 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:56:12.538238   19312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 16:56:12.538849   19312 out.go:298] Setting JSON to false
	I0729 16:56:12.539664   19312 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2324,"bootTime":1722269848,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 16:56:12.539717   19312 start.go:139] virtualization: kvm guest
	I0729 16:56:12.541718   19312 out.go:177] * [addons-433102] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 16:56:12.542895   19312 notify.go:220] Checking for updates...
	I0729 16:56:12.542944   19312 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 16:56:12.544196   19312 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:56:12.545438   19312 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 16:56:12.546652   19312 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 16:56:12.547719   19312 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 16:56:12.549071   19312 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:56:12.550219   19312 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:56:12.581032   19312 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 16:56:12.582227   19312 start.go:297] selected driver: kvm2
	I0729 16:56:12.582243   19312 start.go:901] validating driver "kvm2" against <nil>
	I0729 16:56:12.582254   19312 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:56:12.583060   19312 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:56:12.583149   19312 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19345-11206/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 16:56:12.598402   19312 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 16:56:12.598471   19312 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:56:12.598672   19312 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:56:12.598725   19312 cni.go:84] Creating CNI manager for ""
	I0729 16:56:12.598738   19312 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 16:56:12.598749   19312 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:56:12.598807   19312 start.go:340] cluster config:
	{Name:addons-433102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-433102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:56:12.598908   19312 iso.go:125] acquiring lock: {Name:mke302f851ce8256f9b44dd080ed38df68285cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:56:12.600747   19312 out.go:177] * Starting "addons-433102" primary control-plane node in "addons-433102" cluster
	I0729 16:56:12.601910   19312 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 16:56:12.601939   19312 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 16:56:12.601947   19312 cache.go:56] Caching tarball of preloaded images
	I0729 16:56:12.602010   19312 preload.go:172] Found /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 16:56:12.602022   19312 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 16:56:12.602308   19312 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/config.json ...
	I0729 16:56:12.602326   19312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/config.json: {Name:mk66c10df021b2afa4711063c3ac523ffeb47dc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:56:12.602484   19312 start.go:360] acquireMachinesLock for addons-433102: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:56:12.602546   19312 start.go:364] duration metric: took 44.417µs to acquireMachinesLock for "addons-433102"
	I0729 16:56:12.602568   19312 start.go:93] Provisioning new machine with config: &{Name:addons-433102 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-433102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 16:56:12.602628   19312 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 16:56:12.604389   19312 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0729 16:56:12.604498   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:56:12.604539   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:56:12.618272   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33451
	I0729 16:56:12.618673   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:56:12.619205   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:56:12.619220   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:56:12.619540   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:56:12.619745   19312 main.go:141] libmachine: (addons-433102) Calling .GetMachineName
	I0729 16:56:12.619875   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:56:12.620037   19312 start.go:159] libmachine.API.Create for "addons-433102" (driver="kvm2")
	I0729 16:56:12.620061   19312 client.go:168] LocalClient.Create starting
	I0729 16:56:12.620093   19312 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem
	I0729 16:56:12.832657   19312 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem
	I0729 16:56:12.988338   19312 main.go:141] libmachine: Running pre-create checks...
	I0729 16:56:12.988362   19312 main.go:141] libmachine: (addons-433102) Calling .PreCreateCheck
	I0729 16:56:12.988873   19312 main.go:141] libmachine: (addons-433102) Calling .GetConfigRaw
	I0729 16:56:12.989360   19312 main.go:141] libmachine: Creating machine...
	I0729 16:56:12.989375   19312 main.go:141] libmachine: (addons-433102) Calling .Create
	I0729 16:56:12.989557   19312 main.go:141] libmachine: (addons-433102) Creating KVM machine...
	I0729 16:56:12.990770   19312 main.go:141] libmachine: (addons-433102) DBG | found existing default KVM network
	I0729 16:56:12.991463   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:12.991317   19335 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0729 16:56:12.991481   19312 main.go:141] libmachine: (addons-433102) DBG | created network xml: 
	I0729 16:56:12.991525   19312 main.go:141] libmachine: (addons-433102) DBG | <network>
	I0729 16:56:12.991554   19312 main.go:141] libmachine: (addons-433102) DBG |   <name>mk-addons-433102</name>
	I0729 16:56:12.991568   19312 main.go:141] libmachine: (addons-433102) DBG |   <dns enable='no'/>
	I0729 16:56:12.991579   19312 main.go:141] libmachine: (addons-433102) DBG |   
	I0729 16:56:12.991595   19312 main.go:141] libmachine: (addons-433102) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 16:56:12.991606   19312 main.go:141] libmachine: (addons-433102) DBG |     <dhcp>
	I0729 16:56:12.991616   19312 main.go:141] libmachine: (addons-433102) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 16:56:12.991625   19312 main.go:141] libmachine: (addons-433102) DBG |     </dhcp>
	I0729 16:56:12.991636   19312 main.go:141] libmachine: (addons-433102) DBG |   </ip>
	I0729 16:56:12.991646   19312 main.go:141] libmachine: (addons-433102) DBG |   
	I0729 16:56:12.991672   19312 main.go:141] libmachine: (addons-433102) DBG | </network>
	I0729 16:56:12.991699   19312 main.go:141] libmachine: (addons-433102) DBG | 
	I0729 16:56:12.996884   19312 main.go:141] libmachine: (addons-433102) DBG | trying to create private KVM network mk-addons-433102 192.168.39.0/24...
	I0729 16:56:13.058076   19312 main.go:141] libmachine: (addons-433102) DBG | private KVM network mk-addons-433102 192.168.39.0/24 created
	I0729 16:56:13.058096   19312 main.go:141] libmachine: (addons-433102) Setting up store path in /home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102 ...
	I0729 16:56:13.058108   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:13.058053   19335 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 16:56:13.058176   19312 main.go:141] libmachine: (addons-433102) Building disk image from file:///home/jenkins/minikube-integration/19345-11206/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 16:56:13.058216   19312 main.go:141] libmachine: (addons-433102) Downloading /home/jenkins/minikube-integration/19345-11206/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19345-11206/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 16:56:13.312991   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:13.312850   19335 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa...
	I0729 16:56:13.604795   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:13.604672   19335 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/addons-433102.rawdisk...
	I0729 16:56:13.604817   19312 main.go:141] libmachine: (addons-433102) DBG | Writing magic tar header
	I0729 16:56:13.604826   19312 main.go:141] libmachine: (addons-433102) DBG | Writing SSH key tar header
	I0729 16:56:13.604834   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:13.604781   19335 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102 ...
	I0729 16:56:13.604898   19312 main.go:141] libmachine: (addons-433102) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102
	I0729 16:56:13.604926   19312 main.go:141] libmachine: (addons-433102) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube/machines
	I0729 16:56:13.604943   19312 main.go:141] libmachine: (addons-433102) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102 (perms=drwx------)
	I0729 16:56:13.604960   19312 main.go:141] libmachine: (addons-433102) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 16:56:13.604971   19312 main.go:141] libmachine: (addons-433102) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube/machines (perms=drwxr-xr-x)
	I0729 16:56:13.604983   19312 main.go:141] libmachine: (addons-433102) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube (perms=drwxr-xr-x)
	I0729 16:56:13.604990   19312 main.go:141] libmachine: (addons-433102) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206 (perms=drwxrwxr-x)
	I0729 16:56:13.604998   19312 main.go:141] libmachine: (addons-433102) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 16:56:13.605004   19312 main.go:141] libmachine: (addons-433102) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 16:56:13.605010   19312 main.go:141] libmachine: (addons-433102) Creating domain...
	I0729 16:56:13.605041   19312 main.go:141] libmachine: (addons-433102) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206
	I0729 16:56:13.605062   19312 main.go:141] libmachine: (addons-433102) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 16:56:13.605076   19312 main.go:141] libmachine: (addons-433102) DBG | Checking permissions on dir: /home/jenkins
	I0729 16:56:13.605091   19312 main.go:141] libmachine: (addons-433102) DBG | Checking permissions on dir: /home
	I0729 16:56:13.605103   19312 main.go:141] libmachine: (addons-433102) DBG | Skipping /home - not owner
	I0729 16:56:13.605992   19312 main.go:141] libmachine: (addons-433102) define libvirt domain using xml: 
	I0729 16:56:13.606019   19312 main.go:141] libmachine: (addons-433102) <domain type='kvm'>
	I0729 16:56:13.606027   19312 main.go:141] libmachine: (addons-433102)   <name>addons-433102</name>
	I0729 16:56:13.606033   19312 main.go:141] libmachine: (addons-433102)   <memory unit='MiB'>4000</memory>
	I0729 16:56:13.606038   19312 main.go:141] libmachine: (addons-433102)   <vcpu>2</vcpu>
	I0729 16:56:13.606043   19312 main.go:141] libmachine: (addons-433102)   <features>
	I0729 16:56:13.606049   19312 main.go:141] libmachine: (addons-433102)     <acpi/>
	I0729 16:56:13.606054   19312 main.go:141] libmachine: (addons-433102)     <apic/>
	I0729 16:56:13.606059   19312 main.go:141] libmachine: (addons-433102)     <pae/>
	I0729 16:56:13.606068   19312 main.go:141] libmachine: (addons-433102)     
	I0729 16:56:13.606079   19312 main.go:141] libmachine: (addons-433102)   </features>
	I0729 16:56:13.606090   19312 main.go:141] libmachine: (addons-433102)   <cpu mode='host-passthrough'>
	I0729 16:56:13.606100   19312 main.go:141] libmachine: (addons-433102)   
	I0729 16:56:13.606109   19312 main.go:141] libmachine: (addons-433102)   </cpu>
	I0729 16:56:13.606118   19312 main.go:141] libmachine: (addons-433102)   <os>
	I0729 16:56:13.606130   19312 main.go:141] libmachine: (addons-433102)     <type>hvm</type>
	I0729 16:56:13.606139   19312 main.go:141] libmachine: (addons-433102)     <boot dev='cdrom'/>
	I0729 16:56:13.606144   19312 main.go:141] libmachine: (addons-433102)     <boot dev='hd'/>
	I0729 16:56:13.606153   19312 main.go:141] libmachine: (addons-433102)     <bootmenu enable='no'/>
	I0729 16:56:13.606162   19312 main.go:141] libmachine: (addons-433102)   </os>
	I0729 16:56:13.606174   19312 main.go:141] libmachine: (addons-433102)   <devices>
	I0729 16:56:13.606185   19312 main.go:141] libmachine: (addons-433102)     <disk type='file' device='cdrom'>
	I0729 16:56:13.606200   19312 main.go:141] libmachine: (addons-433102)       <source file='/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/boot2docker.iso'/>
	I0729 16:56:13.606212   19312 main.go:141] libmachine: (addons-433102)       <target dev='hdc' bus='scsi'/>
	I0729 16:56:13.606238   19312 main.go:141] libmachine: (addons-433102)       <readonly/>
	I0729 16:56:13.606256   19312 main.go:141] libmachine: (addons-433102)     </disk>
	I0729 16:56:13.606266   19312 main.go:141] libmachine: (addons-433102)     <disk type='file' device='disk'>
	I0729 16:56:13.606275   19312 main.go:141] libmachine: (addons-433102)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 16:56:13.606295   19312 main.go:141] libmachine: (addons-433102)       <source file='/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/addons-433102.rawdisk'/>
	I0729 16:56:13.606303   19312 main.go:141] libmachine: (addons-433102)       <target dev='hda' bus='virtio'/>
	I0729 16:56:13.606309   19312 main.go:141] libmachine: (addons-433102)     </disk>
	I0729 16:56:13.606316   19312 main.go:141] libmachine: (addons-433102)     <interface type='network'>
	I0729 16:56:13.606335   19312 main.go:141] libmachine: (addons-433102)       <source network='mk-addons-433102'/>
	I0729 16:56:13.606354   19312 main.go:141] libmachine: (addons-433102)       <model type='virtio'/>
	I0729 16:56:13.606382   19312 main.go:141] libmachine: (addons-433102)     </interface>
	I0729 16:56:13.606412   19312 main.go:141] libmachine: (addons-433102)     <interface type='network'>
	I0729 16:56:13.606426   19312 main.go:141] libmachine: (addons-433102)       <source network='default'/>
	I0729 16:56:13.606436   19312 main.go:141] libmachine: (addons-433102)       <model type='virtio'/>
	I0729 16:56:13.606447   19312 main.go:141] libmachine: (addons-433102)     </interface>
	I0729 16:56:13.606457   19312 main.go:141] libmachine: (addons-433102)     <serial type='pty'>
	I0729 16:56:13.606469   19312 main.go:141] libmachine: (addons-433102)       <target port='0'/>
	I0729 16:56:13.606482   19312 main.go:141] libmachine: (addons-433102)     </serial>
	I0729 16:56:13.606494   19312 main.go:141] libmachine: (addons-433102)     <console type='pty'>
	I0729 16:56:13.606505   19312 main.go:141] libmachine: (addons-433102)       <target type='serial' port='0'/>
	I0729 16:56:13.606516   19312 main.go:141] libmachine: (addons-433102)     </console>
	I0729 16:56:13.606526   19312 main.go:141] libmachine: (addons-433102)     <rng model='virtio'>
	I0729 16:56:13.606540   19312 main.go:141] libmachine: (addons-433102)       <backend model='random'>/dev/random</backend>
	I0729 16:56:13.606554   19312 main.go:141] libmachine: (addons-433102)     </rng>
	I0729 16:56:13.606565   19312 main.go:141] libmachine: (addons-433102)     
	I0729 16:56:13.606575   19312 main.go:141] libmachine: (addons-433102)     
	I0729 16:56:13.606586   19312 main.go:141] libmachine: (addons-433102)   </devices>
	I0729 16:56:13.606595   19312 main.go:141] libmachine: (addons-433102) </domain>
	I0729 16:56:13.606608   19312 main.go:141] libmachine: (addons-433102) 
	I0729 16:56:13.613032   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:c5:e6:5e in network default
	I0729 16:56:13.613640   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:13.613678   19312 main.go:141] libmachine: (addons-433102) Ensuring networks are active...
	I0729 16:56:13.614340   19312 main.go:141] libmachine: (addons-433102) Ensuring network default is active
	I0729 16:56:13.614621   19312 main.go:141] libmachine: (addons-433102) Ensuring network mk-addons-433102 is active
	I0729 16:56:13.615116   19312 main.go:141] libmachine: (addons-433102) Getting domain xml...
	I0729 16:56:13.615767   19312 main.go:141] libmachine: (addons-433102) Creating domain...
	I0729 16:56:14.844379   19312 main.go:141] libmachine: (addons-433102) Waiting to get IP...
	I0729 16:56:14.845065   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:14.845426   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:14.845461   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:14.845411   19335 retry.go:31] will retry after 197.612216ms: waiting for machine to come up
	I0729 16:56:15.044833   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:15.045272   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:15.045299   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:15.045239   19335 retry.go:31] will retry after 327.669215ms: waiting for machine to come up
	I0729 16:56:15.374701   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:15.375059   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:15.375081   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:15.375032   19335 retry.go:31] will retry after 438.226444ms: waiting for machine to come up
	I0729 16:56:15.814684   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:15.815075   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:15.815103   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:15.815044   19335 retry.go:31] will retry after 451.065107ms: waiting for machine to come up
	I0729 16:56:16.267236   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:16.267570   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:16.267593   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:16.267543   19335 retry.go:31] will retry after 521.416625ms: waiting for machine to come up
	I0729 16:56:16.790575   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:16.790918   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:16.790965   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:16.790901   19335 retry.go:31] will retry after 941.217092ms: waiting for machine to come up
	I0729 16:56:17.733555   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:17.733988   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:17.734016   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:17.733945   19335 retry.go:31] will retry after 760.216596ms: waiting for machine to come up
	I0729 16:56:18.495589   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:18.496176   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:18.496215   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:18.496148   19335 retry.go:31] will retry after 998.832856ms: waiting for machine to come up
	I0729 16:56:19.496581   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:19.497020   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:19.497049   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:19.496970   19335 retry.go:31] will retry after 1.125358061s: waiting for machine to come up
	I0729 16:56:20.624351   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:20.624730   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:20.624760   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:20.624681   19335 retry.go:31] will retry after 1.46315279s: waiting for machine to come up
	I0729 16:56:22.090636   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:22.091015   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:22.091036   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:22.090991   19335 retry.go:31] will retry after 2.121606251s: waiting for machine to come up
	I0729 16:56:24.215078   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:24.215499   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:24.215527   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:24.215464   19335 retry.go:31] will retry after 2.844738203s: waiting for machine to come up
	I0729 16:56:27.063713   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:27.064234   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:27.064256   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:27.064195   19335 retry.go:31] will retry after 4.421324382s: waiting for machine to come up
	I0729 16:56:31.488709   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:31.489095   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find current IP address of domain addons-433102 in network mk-addons-433102
	I0729 16:56:31.489174   19312 main.go:141] libmachine: (addons-433102) DBG | I0729 16:56:31.489085   19335 retry.go:31] will retry after 4.584980769s: waiting for machine to come up
	I0729 16:56:36.077804   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.078382   19312 main.go:141] libmachine: (addons-433102) Found IP for machine: 192.168.39.73
	I0729 16:56:36.078399   19312 main.go:141] libmachine: (addons-433102) Reserving static IP address...
	I0729 16:56:36.078430   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has current primary IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.078830   19312 main.go:141] libmachine: (addons-433102) DBG | unable to find host DHCP lease matching {name: "addons-433102", mac: "52:54:00:d8:3f:00", ip: "192.168.39.73"} in network mk-addons-433102
	I0729 16:56:36.147579   19312 main.go:141] libmachine: (addons-433102) DBG | Getting to WaitForSSH function...
	I0729 16:56:36.147609   19312 main.go:141] libmachine: (addons-433102) Reserved static IP address: 192.168.39.73
	I0729 16:56:36.147628   19312 main.go:141] libmachine: (addons-433102) Waiting for SSH to be available...
	I0729 16:56:36.149793   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.150186   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:36.150218   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.150436   19312 main.go:141] libmachine: (addons-433102) DBG | Using SSH client type: external
	I0729 16:56:36.150459   19312 main.go:141] libmachine: (addons-433102) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa (-rw-------)
	I0729 16:56:36.150488   19312 main.go:141] libmachine: (addons-433102) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.73 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 16:56:36.150506   19312 main.go:141] libmachine: (addons-433102) DBG | About to run SSH command:
	I0729 16:56:36.150518   19312 main.go:141] libmachine: (addons-433102) DBG | exit 0
	I0729 16:56:36.286191   19312 main.go:141] libmachine: (addons-433102) DBG | SSH cmd err, output: <nil>: 
	I0729 16:56:36.286473   19312 main.go:141] libmachine: (addons-433102) KVM machine creation complete!
	I0729 16:56:36.286764   19312 main.go:141] libmachine: (addons-433102) Calling .GetConfigRaw
	I0729 16:56:36.287302   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:56:36.287473   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:56:36.287605   19312 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 16:56:36.287619   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:56:36.288873   19312 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 16:56:36.288900   19312 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 16:56:36.288906   19312 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 16:56:36.288911   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:56:36.291004   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.291310   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:36.291330   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.291499   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:56:36.291685   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:36.291838   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:36.291990   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:56:36.292144   19312 main.go:141] libmachine: Using SSH client type: native
	I0729 16:56:36.292301   19312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0729 16:56:36.292311   19312 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 16:56:36.401591   19312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 16:56:36.401613   19312 main.go:141] libmachine: Detecting the provisioner...
	I0729 16:56:36.401621   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:56:36.404145   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.404456   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:36.404484   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.404614   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:56:36.404798   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:36.404955   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:36.405131   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:56:36.405258   19312 main.go:141] libmachine: Using SSH client type: native
	I0729 16:56:36.405423   19312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0729 16:56:36.405434   19312 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 16:56:36.519057   19312 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 16:56:36.519122   19312 main.go:141] libmachine: found compatible host: buildroot
	I0729 16:56:36.519131   19312 main.go:141] libmachine: Provisioning with buildroot...
	I0729 16:56:36.519139   19312 main.go:141] libmachine: (addons-433102) Calling .GetMachineName
	I0729 16:56:36.519385   19312 buildroot.go:166] provisioning hostname "addons-433102"
	I0729 16:56:36.519412   19312 main.go:141] libmachine: (addons-433102) Calling .GetMachineName
	I0729 16:56:36.519574   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:56:36.522009   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.522343   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:36.522390   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.522484   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:56:36.522647   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:36.522800   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:36.522944   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:56:36.523138   19312 main.go:141] libmachine: Using SSH client type: native
	I0729 16:56:36.523306   19312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0729 16:56:36.523318   19312 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-433102 && echo "addons-433102" | sudo tee /etc/hostname
	I0729 16:56:36.652947   19312 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-433102
	
	I0729 16:56:36.652988   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:56:36.655710   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.656041   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:36.656060   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.656267   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:56:36.656450   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:36.656655   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:36.656769   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:56:36.656931   19312 main.go:141] libmachine: Using SSH client type: native
	I0729 16:56:36.657131   19312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0729 16:56:36.657154   19312 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-433102' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-433102/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-433102' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 16:56:36.781429   19312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 16:56:36.781456   19312 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 16:56:36.781488   19312 buildroot.go:174] setting up certificates
	I0729 16:56:36.781499   19312 provision.go:84] configureAuth start
	I0729 16:56:36.781507   19312 main.go:141] libmachine: (addons-433102) Calling .GetMachineName
	I0729 16:56:36.781752   19312 main.go:141] libmachine: (addons-433102) Calling .GetIP
	I0729 16:56:36.784322   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.784779   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:36.784799   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.784968   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:56:36.787267   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.787582   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:36.787604   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:36.787749   19312 provision.go:143] copyHostCerts
	I0729 16:56:36.787820   19312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 16:56:36.787988   19312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 16:56:36.788086   19312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 16:56:36.788160   19312 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.addons-433102 san=[127.0.0.1 192.168.39.73 addons-433102 localhost minikube]
	I0729 16:56:37.053230   19312 provision.go:177] copyRemoteCerts
	I0729 16:56:37.053301   19312 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 16:56:37.053329   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:56:37.056218   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.056619   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:37.056644   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.056802   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:56:37.056986   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:37.057148   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:56:37.057254   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:56:37.144532   19312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 16:56:37.168808   19312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 16:56:37.192168   19312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 16:56:37.215966   19312 provision.go:87] duration metric: took 434.454247ms to configureAuth
	I0729 16:56:37.215995   19312 buildroot.go:189] setting minikube options for container-runtime
	I0729 16:56:37.216181   19312 config.go:182] Loaded profile config "addons-433102": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 16:56:37.216264   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:56:37.218859   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.219159   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:37.219179   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.219393   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:56:37.219596   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:37.219767   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:37.219921   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:56:37.220156   19312 main.go:141] libmachine: Using SSH client type: native
	I0729 16:56:37.220330   19312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0729 16:56:37.220347   19312 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 16:56:37.587397   19312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 16:56:37.587427   19312 main.go:141] libmachine: Checking connection to Docker...
	I0729 16:56:37.587439   19312 main.go:141] libmachine: (addons-433102) Calling .GetURL
	I0729 16:56:37.588727   19312 main.go:141] libmachine: (addons-433102) DBG | Using libvirt version 6000000
	I0729 16:56:37.590958   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.591358   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:37.591392   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.591556   19312 main.go:141] libmachine: Docker is up and running!
	I0729 16:56:37.591571   19312 main.go:141] libmachine: Reticulating splines...
	I0729 16:56:37.591579   19312 client.go:171] duration metric: took 24.971510994s to LocalClient.Create
	I0729 16:56:37.591604   19312 start.go:167] duration metric: took 24.97156689s to libmachine.API.Create "addons-433102"
	I0729 16:56:37.591615   19312 start.go:293] postStartSetup for "addons-433102" (driver="kvm2")
	I0729 16:56:37.591629   19312 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 16:56:37.591649   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:56:37.591919   19312 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 16:56:37.591948   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:56:37.593994   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.594301   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:37.594325   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.594530   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:56:37.594733   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:37.594895   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:56:37.595160   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:56:37.680694   19312 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 16:56:37.684797   19312 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 16:56:37.684818   19312 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 16:56:37.684880   19312 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 16:56:37.684909   19312 start.go:296] duration metric: took 93.279335ms for postStartSetup
	I0729 16:56:37.684944   19312 main.go:141] libmachine: (addons-433102) Calling .GetConfigRaw
	I0729 16:56:37.719242   19312 main.go:141] libmachine: (addons-433102) Calling .GetIP
	I0729 16:56:37.721882   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.722218   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:37.722245   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.722490   19312 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/config.json ...
	I0729 16:56:37.722664   19312 start.go:128] duration metric: took 25.120027034s to createHost
	I0729 16:56:37.722683   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:56:37.724959   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.725330   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:37.725361   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.725526   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:56:37.725688   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:37.725840   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:37.725972   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:56:37.726113   19312 main.go:141] libmachine: Using SSH client type: native
	I0729 16:56:37.726324   19312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0729 16:56:37.726340   19312 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 16:56:37.843053   19312 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722272197.801311336
	
	I0729 16:56:37.843081   19312 fix.go:216] guest clock: 1722272197.801311336
	I0729 16:56:37.843092   19312 fix.go:229] Guest: 2024-07-29 16:56:37.801311336 +0000 UTC Remote: 2024-07-29 16:56:37.722674098 +0000 UTC m=+25.217297489 (delta=78.637238ms)
	I0729 16:56:37.843119   19312 fix.go:200] guest clock delta is within tolerance: 78.637238ms
	I0729 16:56:37.843126   19312 start.go:83] releasing machines lock for "addons-433102", held for 25.240567796s
	I0729 16:56:37.843150   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:56:37.843417   19312 main.go:141] libmachine: (addons-433102) Calling .GetIP
	I0729 16:56:37.845864   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.846166   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:37.846191   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.846412   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:56:37.846847   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:56:37.847017   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:56:37.847124   19312 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 16:56:37.847162   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:56:37.847250   19312 ssh_runner.go:195] Run: cat /version.json
	I0729 16:56:37.847272   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:56:37.849637   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.849819   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.849931   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:37.849955   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.850090   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:37.850112   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:56:37.850112   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:37.850273   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:56:37.850329   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:37.850420   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:56:37.850469   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:56:37.850528   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:56:37.850584   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:56:37.850610   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:56:37.954570   19312 ssh_runner.go:195] Run: systemctl --version
	I0729 16:56:37.960600   19312 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 16:56:38.126790   19312 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 16:56:38.132613   19312 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 16:56:38.132670   19312 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 16:56:38.149032   19312 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 16:56:38.149046   19312 start.go:495] detecting cgroup driver to use...
	I0729 16:56:38.149107   19312 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 16:56:38.164447   19312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 16:56:38.177490   19312 docker.go:217] disabling cri-docker service (if available) ...
	I0729 16:56:38.177530   19312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 16:56:38.190727   19312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 16:56:38.203787   19312 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 16:56:38.316392   19312 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 16:56:38.453610   19312 docker.go:233] disabling docker service ...
	I0729 16:56:38.453696   19312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 16:56:38.468227   19312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 16:56:38.481220   19312 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 16:56:38.615689   19312 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 16:56:38.755640   19312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 16:56:38.769430   19312 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 16:56:38.787573   19312 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 16:56:38.787635   19312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 16:56:38.797782   19312 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 16:56:38.797847   19312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 16:56:38.808143   19312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 16:56:38.817840   19312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 16:56:38.827533   19312 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 16:56:38.837729   19312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 16:56:38.847681   19312 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 16:56:38.863760   19312 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 16:56:38.873190   19312 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 16:56:38.881868   19312 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 16:56:38.881913   19312 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 16:56:38.895660   19312 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 16:56:38.905721   19312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:56:39.031841   19312 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 16:56:39.161975   19312 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 16:56:39.162074   19312 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 16:56:39.166669   19312 start.go:563] Will wait 60s for crictl version
	I0729 16:56:39.166730   19312 ssh_runner.go:195] Run: which crictl
	I0729 16:56:39.170265   19312 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 16:56:39.206771   19312 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 16:56:39.206883   19312 ssh_runner.go:195] Run: crio --version
	I0729 16:56:39.233749   19312 ssh_runner.go:195] Run: crio --version
	I0729 16:56:39.263049   19312 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 16:56:39.264272   19312 main.go:141] libmachine: (addons-433102) Calling .GetIP
	I0729 16:56:39.266943   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:39.267277   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:56:39.267305   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:56:39.267488   19312 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 16:56:39.271361   19312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 16:56:39.283241   19312 kubeadm.go:883] updating cluster {Name:addons-433102 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:addons-433102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.73 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 16:56:39.283349   19312 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 16:56:39.283408   19312 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 16:56:39.319986   19312 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 16:56:39.320046   19312 ssh_runner.go:195] Run: which lz4
	I0729 16:56:39.324087   19312 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 16:56:39.328259   19312 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 16:56:39.328284   19312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 16:56:40.634045   19312 crio.go:462] duration metric: took 1.309991465s to copy over tarball
	I0729 16:56:40.634124   19312 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 16:56:42.859750   19312 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.225591595s)
	I0729 16:56:42.859781   19312 crio.go:469] duration metric: took 2.225705873s to extract the tarball
	I0729 16:56:42.859789   19312 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 16:56:42.897612   19312 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 16:56:42.954927   19312 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 16:56:42.954947   19312 cache_images.go:84] Images are preloaded, skipping loading
	I0729 16:56:42.954954   19312 kubeadm.go:934] updating node { 192.168.39.73 8443 v1.30.3 crio true true} ...
	I0729 16:56:42.955063   19312 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-433102 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-433102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 16:56:42.955149   19312 ssh_runner.go:195] Run: crio config
	I0729 16:56:43.005626   19312 cni.go:84] Creating CNI manager for ""
	I0729 16:56:43.005648   19312 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 16:56:43.005658   19312 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 16:56:43.005681   19312 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.73 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-433102 NodeName:addons-433102 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.73 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 16:56:43.005834   19312 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.73
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-433102"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.73
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.73"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 16:56:43.005905   19312 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 16:56:43.016291   19312 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 16:56:43.016348   19312 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 16:56:43.026602   19312 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0729 16:56:43.046001   19312 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 16:56:43.064806   19312 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0729 16:56:43.083849   19312 ssh_runner.go:195] Run: grep 192.168.39.73	control-plane.minikube.internal$ /etc/hosts
	I0729 16:56:43.087981   19312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.73	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 16:56:43.100088   19312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:56:43.224424   19312 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 16:56:43.241475   19312 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102 for IP: 192.168.39.73
	I0729 16:56:43.241507   19312 certs.go:194] generating shared ca certs ...
	I0729 16:56:43.241523   19312 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:56:43.241661   19312 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 16:56:43.314518   19312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt ...
	I0729 16:56:43.314543   19312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt: {Name:mk7430f93e4eb66a7ae2250e2209426ae1a6ec80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:56:43.314691   19312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key ...
	I0729 16:56:43.314701   19312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key: {Name:mk343508971c6b777f48b3cf3c00a2a2d9184e15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:56:43.314773   19312 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 16:56:43.589451   19312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt ...
	I0729 16:56:43.589521   19312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt: {Name:mk193397c3fd162eb6f6b5a8a056aeb2bab9799e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:56:43.589701   19312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key ...
	I0729 16:56:43.589715   19312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key: {Name:mk0532d535b11308b747e8b70f9fa02e4226d30c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:56:43.589811   19312 certs.go:256] generating profile certs ...
	I0729 16:56:43.589880   19312 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.key
	I0729 16:56:43.589899   19312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt with IP's: []
	I0729 16:56:43.677844   19312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt ...
	I0729 16:56:43.677881   19312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: {Name:mk71f2a926e336f40bb13877ebd845ea67b83a6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:56:43.678091   19312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.key ...
	I0729 16:56:43.678108   19312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.key: {Name:mkbfaa04a247d8372ad86365fe1cfd8ea3a8259e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:56:43.678220   19312 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/apiserver.key.b26ac08d
	I0729 16:56:43.678247   19312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/apiserver.crt.b26ac08d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.73]
	I0729 16:56:43.839546   19312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/apiserver.crt.b26ac08d ...
	I0729 16:56:43.839577   19312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/apiserver.crt.b26ac08d: {Name:mk71478c571d6b22412d1acff607c39fddebb84f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:56:43.839754   19312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/apiserver.key.b26ac08d ...
	I0729 16:56:43.839770   19312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/apiserver.key.b26ac08d: {Name:mkc1fc6a26774617ef99371102f09cfd9edc163c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:56:43.839876   19312 certs.go:381] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/apiserver.crt.b26ac08d -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/apiserver.crt
	I0729 16:56:43.839967   19312 certs.go:385] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/apiserver.key.b26ac08d -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/apiserver.key
	I0729 16:56:43.840047   19312 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/proxy-client.key
	I0729 16:56:43.840071   19312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/proxy-client.crt with IP's: []
	I0729 16:56:43.913329   19312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/proxy-client.crt ...
	I0729 16:56:43.913356   19312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/proxy-client.crt: {Name:mk70ad2528d19153e54b6e99edab678b10352f19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:56:43.913523   19312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/proxy-client.key ...
	I0729 16:56:43.913537   19312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/proxy-client.key: {Name:mkf0a666bf793c173adb376a187ba2c0a6db82a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:56:43.913724   19312 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 16:56:43.913769   19312 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 16:56:43.913803   19312 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 16:56:43.913844   19312 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 16:56:43.914381   19312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 16:56:43.941664   19312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 16:56:43.966791   19312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 16:56:43.988404   19312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 16:56:44.011257   19312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 16:56:44.033777   19312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 16:56:44.056287   19312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 16:56:44.078663   19312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 16:56:44.102892   19312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 16:56:44.128607   19312 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 16:56:44.147578   19312 ssh_runner.go:195] Run: openssl version
	I0729 16:56:44.154852   19312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 16:56:44.168659   19312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 16:56:44.172919   19312 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 16:56:44.172968   19312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 16:56:44.178532   19312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 16:56:44.188955   19312 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 16:56:44.193148   19312 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 16:56:44.193199   19312 kubeadm.go:392] StartCluster: {Name:addons-433102 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:addons-433102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.73 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:56:44.193269   19312 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 16:56:44.193304   19312 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 16:56:44.236499   19312 cri.go:89] found id: ""
	I0729 16:56:44.236579   19312 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 16:56:44.247426   19312 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 16:56:44.256812   19312 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 16:56:44.266162   19312 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 16:56:44.266181   19312 kubeadm.go:157] found existing configuration files:
	
	I0729 16:56:44.266223   19312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 16:56:44.275002   19312 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 16:56:44.275050   19312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 16:56:44.284208   19312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 16:56:44.292818   19312 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 16:56:44.292874   19312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 16:56:44.301657   19312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 16:56:44.310340   19312 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 16:56:44.310404   19312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 16:56:44.319527   19312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 16:56:44.327978   19312 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 16:56:44.328029   19312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 16:56:44.337088   19312 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 16:56:44.401919   19312 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 16:56:44.402182   19312 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 16:56:44.523563   19312 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 16:56:44.523734   19312 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 16:56:44.523909   19312 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 16:56:44.723453   19312 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 16:56:44.756902   19312 out.go:204]   - Generating certificates and keys ...
	I0729 16:56:44.757015   19312 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 16:56:44.757123   19312 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 16:56:45.031214   19312 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 16:56:45.204754   19312 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 16:56:45.404215   19312 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 16:56:45.599441   19312 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 16:56:45.830652   19312 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 16:56:45.830862   19312 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-433102 localhost] and IPs [192.168.39.73 127.0.0.1 ::1]
	I0729 16:56:45.946196   19312 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 16:56:45.946480   19312 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-433102 localhost] and IPs [192.168.39.73 127.0.0.1 ::1]
	I0729 16:56:46.088178   19312 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 16:56:46.199107   19312 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 16:56:46.264572   19312 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 16:56:46.264810   19312 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 16:56:46.465367   19312 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 16:56:46.571240   19312 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 16:56:46.611748   19312 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 16:56:46.731598   19312 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 16:56:47.004895   19312 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 16:56:47.005598   19312 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 16:56:47.007974   19312 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 16:56:47.009895   19312 out.go:204]   - Booting up control plane ...
	I0729 16:56:47.009995   19312 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 16:56:47.010091   19312 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 16:56:47.010176   19312 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 16:56:47.026160   19312 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 16:56:47.027140   19312 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 16:56:47.027203   19312 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 16:56:47.152677   19312 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 16:56:47.152752   19312 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 16:56:47.654669   19312 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.271717ms
	I0729 16:56:47.654750   19312 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 16:56:52.657144   19312 kubeadm.go:310] [api-check] The API server is healthy after 5.001911747s
	I0729 16:56:52.670659   19312 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 16:56:52.684613   19312 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 16:56:52.711926   19312 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 16:56:52.712173   19312 kubeadm.go:310] [mark-control-plane] Marking the node addons-433102 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 16:56:52.723734   19312 kubeadm.go:310] [bootstrap-token] Using token: w4q1ef.q8wav9dzw9ik2bkk
	I0729 16:56:52.725222   19312 out.go:204]   - Configuring RBAC rules ...
	I0729 16:56:52.725344   19312 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 16:56:52.730931   19312 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 16:56:52.742158   19312 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 16:56:52.746690   19312 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 16:56:52.750329   19312 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 16:56:52.753634   19312 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 16:56:53.065312   19312 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 16:56:53.516649   19312 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 16:56:54.065063   19312 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 16:56:54.065094   19312 kubeadm.go:310] 
	I0729 16:56:54.065170   19312 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 16:56:54.065182   19312 kubeadm.go:310] 
	I0729 16:56:54.065296   19312 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 16:56:54.065319   19312 kubeadm.go:310] 
	I0729 16:56:54.065370   19312 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 16:56:54.065462   19312 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 16:56:54.065542   19312 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 16:56:54.065550   19312 kubeadm.go:310] 
	I0729 16:56:54.065620   19312 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 16:56:54.065628   19312 kubeadm.go:310] 
	I0729 16:56:54.065705   19312 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 16:56:54.065720   19312 kubeadm.go:310] 
	I0729 16:56:54.065802   19312 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 16:56:54.065915   19312 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 16:56:54.066014   19312 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 16:56:54.066024   19312 kubeadm.go:310] 
	I0729 16:56:54.066130   19312 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 16:56:54.066250   19312 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 16:56:54.066261   19312 kubeadm.go:310] 
	I0729 16:56:54.066388   19312 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token w4q1ef.q8wav9dzw9ik2bkk \
	I0729 16:56:54.066543   19312 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 \
	I0729 16:56:54.066572   19312 kubeadm.go:310] 	--control-plane 
	I0729 16:56:54.066584   19312 kubeadm.go:310] 
	I0729 16:56:54.066704   19312 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 16:56:54.066713   19312 kubeadm.go:310] 
	I0729 16:56:54.066811   19312 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token w4q1ef.q8wav9dzw9ik2bkk \
	I0729 16:56:54.066946   19312 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 
	I0729 16:56:54.067182   19312 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 16:56:54.067214   19312 cni.go:84] Creating CNI manager for ""
	I0729 16:56:54.067223   19312 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 16:56:54.069009   19312 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 16:56:54.070272   19312 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 16:56:54.081181   19312 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 16:56:54.099028   19312 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 16:56:54.099155   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:56:54.099192   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-433102 minikube.k8s.io/updated_at=2024_07_29T16_56_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899 minikube.k8s.io/name=addons-433102 minikube.k8s.io/primary=true
	I0729 16:56:54.131399   19312 ops.go:34] apiserver oom_adj: -16
	I0729 16:56:54.228627   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:56:54.729164   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:56:55.228783   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:56:55.729076   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:56:56.229015   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:56:56.729127   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:56:57.228752   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:56:57.729245   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:56:58.228897   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:56:58.729020   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:56:59.229274   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:56:59.729385   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:00.229493   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:00.729518   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:01.229294   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:01.728836   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:02.229405   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:02.729489   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:03.229684   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:03.729378   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:04.229277   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:04.729357   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:05.229712   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:05.729553   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:06.229634   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:06.729285   19312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:57:06.810310   19312 kubeadm.go:1113] duration metric: took 12.711197871s to wait for elevateKubeSystemPrivileges
	I0729 16:57:06.810349   19312 kubeadm.go:394] duration metric: took 22.617153204s to StartCluster
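The burst of repeated "kubectl get sa default" invocations above appears to be minikube polling until the cluster's default service account exists so that the minikube-rbac cluster role binding can take effect; the 12.711s figure on the elevateKubeSystemPrivileges line is the duration of that wait, while the 22.617s StartCluster figure covers the full kubeadm bring-up.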
	I0729 16:57:06.810382   19312 settings.go:142] acquiring lock: {Name:mkd2c4591636cc1d19b23a0dab1807db2e7ea395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:57:06.810539   19312 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 16:57:06.811023   19312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:57:06.811247   19312 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 16:57:06.811255   19312 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.73 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 16:57:06.811317   19312 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
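Each key in the toEnable map above corresponds to a minikube addon that the test harness enables programmatically in the lines that follow; the same addons can also be toggled manually with the CLI, e.g. 'minikube addons enable metrics-server -p addons-433102' (shown here only as a usage reminder, not as part of the logged run).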
	I0729 16:57:06.811427   19312 addons.go:69] Setting gcp-auth=true in profile "addons-433102"
	I0729 16:57:06.811450   19312 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-433102"
	I0729 16:57:06.811457   19312 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-433102"
	I0729 16:57:06.811467   19312 addons.go:69] Setting default-storageclass=true in profile "addons-433102"
	I0729 16:57:06.811472   19312 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-433102"
	I0729 16:57:06.811448   19312 config.go:182] Loaded profile config "addons-433102": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 16:57:06.811503   19312 addons.go:69] Setting ingress=true in profile "addons-433102"
	I0729 16:57:06.811503   19312 addons.go:69] Setting volcano=true in profile "addons-433102"
	I0729 16:57:06.811512   19312 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-433102"
	I0729 16:57:06.811515   19312 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-433102"
	I0729 16:57:06.811521   19312 addons.go:234] Setting addon ingress=true in "addons-433102"
	I0729 16:57:06.811525   19312 addons.go:234] Setting addon volcano=true in "addons-433102"
	I0729 16:57:06.811498   19312 addons.go:69] Setting helm-tiller=true in profile "addons-433102"
	I0729 16:57:06.811546   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.811550   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.811560   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.811568   19312 addons.go:234] Setting addon helm-tiller=true in "addons-433102"
	I0729 16:57:06.811595   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.811654   19312 addons.go:69] Setting cloud-spanner=true in profile "addons-433102"
	I0729 16:57:06.811677   19312 addons.go:234] Setting addon cloud-spanner=true in "addons-433102"
	I0729 16:57:06.811679   19312 addons.go:69] Setting yakd=true in profile "addons-433102"
	I0729 16:57:06.811696   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.811705   19312 addons.go:234] Setting addon yakd=true in "addons-433102"
	I0729 16:57:06.811728   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.811982   19312 addons.go:69] Setting ingress-dns=true in profile "addons-433102"
	I0729 16:57:06.812002   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.812009   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.812013   19312 addons.go:69] Setting inspektor-gadget=true in profile "addons-433102"
	I0729 16:57:06.812023   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.812032   19312 addons.go:234] Setting addon inspektor-gadget=true in "addons-433102"
	I0729 16:57:06.812033   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.812036   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.812036   19312 addons.go:69] Setting volumesnapshots=true in profile "addons-433102"
	I0729 16:57:06.812045   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.812053   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.812061   19312 addons.go:234] Setting addon volumesnapshots=true in "addons-433102"
	I0729 16:57:06.811487   19312 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-433102"
	I0729 16:57:06.812081   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.812089   19312 addons.go:69] Setting metrics-server=true in profile "addons-433102"
	I0729 16:57:06.812023   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.812107   19312 addons.go:234] Setting addon metrics-server=true in "addons-433102"
	I0729 16:57:06.812111   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.812119   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.812008   19312 addons.go:234] Setting addon ingress-dns=true in "addons-433102"
	I0729 16:57:06.812180   19312 addons.go:69] Setting storage-provisioner=true in profile "addons-433102"
	I0729 16:57:06.812194   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.812206   19312 addons.go:234] Setting addon storage-provisioner=true in "addons-433102"
	I0729 16:57:06.812354   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.812369   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.812384   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.811489   19312 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-433102"
	I0729 16:57:06.812455   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.812490   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.812186   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.812535   19312 addons.go:69] Setting registry=true in profile "addons-433102"
	I0729 16:57:06.812556   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.812556   19312 addons.go:234] Setting addon registry=true in "addons-433102"
	I0729 16:57:06.812532   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.811462   19312 mustload.go:65] Loading cluster: addons-433102
	I0729 16:57:06.812589   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.812608   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.812639   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.812710   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.812725   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.812728   19312 config.go:182] Loaded profile config "addons-433102": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 16:57:06.812764   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.812853   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.812872   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.812952   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.812976   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.812999   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.813025   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.813039   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.813254   19312 out.go:177] * Verifying Kubernetes components...
	I0729 16:57:06.815519   19312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:57:06.832562   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38689
	I0729 16:57:06.832584   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34009
	I0729 16:57:06.832725   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36721
	I0729 16:57:06.832738   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39087
	I0729 16:57:06.833048   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.833293   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.833384   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.833442   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.833611   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.833636   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.833832   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.833849   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.833976   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.833987   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.834101   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.834122   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.834182   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.834223   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.834226   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.834503   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.834518   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.834729   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.834759   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.834882   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.834919   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.838741   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40279
	I0729 16:57:06.838758   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42929
	I0729 16:57:06.838903   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.838913   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.838919   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.838937   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.838950   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.838995   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.839190   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.839229   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.840250   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.841011   19312 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-433102"
	I0729 16:57:06.841054   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.841406   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.841440   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.841930   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.841948   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.846404   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.847101   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.847135   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.850296   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32823
	I0729 16:57:06.850465   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.851050   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.851158   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.851178   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.851625   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.851642   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.851704   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.852082   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.852485   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.852520   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.852650   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.852671   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.861978   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44237
	I0729 16:57:06.862388   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.862868   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.862888   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.863261   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.863467   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.866329   19312 addons.go:234] Setting addon default-storageclass=true in "addons-433102"
	I0729 16:57:06.866402   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.866761   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.866779   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.868727   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43689
	I0729 16:57:06.869150   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.870414   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.870433   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.870774   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.870958   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.871680   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33769
	I0729 16:57:06.872075   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.882461   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33853
	I0729 16:57:06.882584   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.882601   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.882610   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44791
	I0729 16:57:06.882675   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.883038   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.883525   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.883538   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.883867   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.883984   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
	I0729 16:57:06.884497   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.884515   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.884956   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.885569   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.885581   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.885882   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.886022   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.888151   19312 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0729 16:57:06.889225   19312 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0729 16:57:06.889245   19312 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0729 16:57:06.889264   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.889839   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41247
	I0729 16:57:06.889856   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.889961   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.892231   19312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0729 16:57:06.892647   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.893171   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.893197   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.893631   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.893819   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.893969   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.894092   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:06.894678   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.894869   19312 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0729 16:57:06.895976   19312 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0729 16:57:06.897109   19312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0729 16:57:06.897517   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.897538   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.897923   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.898479   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.898521   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.898785   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.898805   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.899327   19312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0729 16:57:06.900483   19312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0729 16:57:06.901433   19312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0729 16:57:06.902502   19312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0729 16:57:06.903157   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.903329   19312 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0729 16:57:06.903345   19312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0729 16:57:06.903363   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.903700   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.903719   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.906917   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.907327   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.907348   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.907416   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.908089   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.908126   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.908465   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.908653   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.908817   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.908954   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:06.909373   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34241
	I0729 16:57:06.909484   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41405
	I0729 16:57:06.910511   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.910511   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.910979   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.910997   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.911083   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.911097   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.912841   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.912848   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.912849   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40417
	I0729 16:57:06.912903   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40143
	I0729 16:57:06.912975   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44807
	I0729 16:57:06.913185   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.913266   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.913659   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.913686   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.913870   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45553
	I0729 16:57:06.913878   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.914557   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.914647   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.914670   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.915008   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.915152   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.915175   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.915510   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.915567   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32813
	I0729 16:57:06.915594   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.915610   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.915679   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36195
	I0729 16:57:06.915771   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.915907   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.915980   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.916242   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.916249   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.916261   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.916314   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:06.916326   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:06.917751   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34777
	I0729 16:57:06.917764   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.917808   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:06.917815   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:06.917824   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:06.917830   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:06.917750   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.917900   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.918173   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.918183   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.918258   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:06.918270   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.918287   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:06.918290   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.918296   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:06.918354   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	W0729 16:57:06.918380   19312 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0729 16:57:06.918824   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.918839   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.918898   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.919168   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.919287   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.919523   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.919547   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.919577   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.919806   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.920078   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.920620   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.920641   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.920964   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.921215   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:06.921490   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.921518   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.921561   19312 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0729 16:57:06.921600   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.921628   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.921855   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.922592   19312 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0729 16:57:06.922609   19312 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0729 16:57:06.922612   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.922626   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.923716   19312 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0729 16:57:06.924967   19312 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0729 16:57:06.925784   19312 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0729 16:57:06.925802   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0729 16:57:06.925818   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.925820   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.926354   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.926389   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.926538   19312 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0729 16:57:06.926550   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0729 16:57:06.926564   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.927126   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.927324   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.927604   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.927926   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:06.928243   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37827
	I0729 16:57:06.928650   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.929204   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.929231   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.929572   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.929727   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.931352   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.931649   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.932175   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.932358   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.932375   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.932528   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.932785   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.932811   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.932836   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.933029   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.933116   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.933363   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:06.933657   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.933793   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42489
	I0729 16:57:06.933872   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.934008   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:06.934375   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.934445   19312 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 16:57:06.935025   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.935044   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.935353   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.935952   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:06.935987   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:06.937403   19312 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0729 16:57:06.939228   19312 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 16:57:06.940495   19312 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 16:57:06.940521   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0729 16:57:06.940538   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.943540   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.943873   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.943893   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.944204   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.944436   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.944600   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.944762   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:06.948559   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42301
	I0729 16:57:06.948678   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33693
	I0729 16:57:06.948998   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.949189   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.949433   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.949449   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.949759   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.949776   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.949840   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.950210   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.950260   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.950486   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.952238   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.952290   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.954394   19312 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0729 16:57:06.954491   19312 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0729 16:57:06.955517   19312 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 16:57:06.955540   19312 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 16:57:06.955558   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.956194   19312 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 16:57:06.956208   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0729 16:57:06.956224   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.956522   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37103
	I0729 16:57:06.957287   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40245
	I0729 16:57:06.957445   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.957837   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.958123   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.958139   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.958624   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.958640   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.958997   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.959192   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.959248   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45551
	I0729 16:57:06.959396   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.959580   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.959645   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.959707   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.959838   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.959857   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.960007   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.960229   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.960404   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.960457   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.960481   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44693
	I0729 16:57:06.960698   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.960719   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.960731   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:06.962460   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.962525   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.962610   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.962979   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.962992   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.963054   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.963126   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.963173   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.963227   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.963241   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.963266   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:06.963519   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.963575   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.963748   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.963997   19312 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0729 16:57:06.965094   19312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0729 16:57:06.965108   19312 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0729 16:57:06.965123   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.965190   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.965725   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.966444   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.966771   19312 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 16:57:06.966783   19312 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 16:57:06.966797   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.967527   19312 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0729 16:57:06.967552   19312 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:57:06.968730   19312 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 16:57:06.968746   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0729 16:57:06.968772   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.968847   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.968915   19312 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 16:57:06.968930   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 16:57:06.968946   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.969364   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.969400   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.969851   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41757
	I0729 16:57:06.969896   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.970223   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.970404   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.970541   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.970817   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:06.971284   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.971300   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.971788   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.972638   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.972642   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.973109   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.973128   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.973157   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.973347   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.973501   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.973643   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.973783   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:06.974039   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.974060   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.974262   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.974438   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.974577   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.974722   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:06.975135   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.975588   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.975658   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.975793   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35689
	I0729 16:57:06.976056   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.976125   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.976201   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.976461   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.976661   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.976677   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.976754   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.977001   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.977157   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:06.977207   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.978181   19312 out.go:177]   - Using image docker.io/registry:2.8.3
	I0729 16:57:06.979207   19312 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0729 16:57:06.979559   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34881
	I0729 16:57:06.979915   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:06.980276   19312 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0729 16:57:06.980290   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0729 16:57:06.980301   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.980318   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:06.980343   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:06.981143   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:06.981330   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:06.983033   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:06.983087   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.983394   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.983417   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.983556   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.983715   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.983857   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.983961   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:06.984457   19312 out.go:177]   - Using image docker.io/busybox:stable
	I0729 16:57:06.985548   19312 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0729 16:57:06.986635   19312 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 16:57:06.986648   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0729 16:57:06.986662   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:06.989224   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.989638   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:06.989663   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:06.989807   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:06.989984   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:06.990135   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:06.990273   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:07.297462   19312 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0729 16:57:07.297484   19312 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0729 16:57:07.408626   19312 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0729 16:57:07.408650   19312 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0729 16:57:07.409109   19312 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0729 16:57:07.409142   19312 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0729 16:57:07.424659   19312 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0729 16:57:07.424679   19312 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0729 16:57:07.456352   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 16:57:07.486037   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0729 16:57:07.500014   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 16:57:07.501860   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 16:57:07.515215   19312 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0729 16:57:07.515238   19312 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0729 16:57:07.518182   19312 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0729 16:57:07.518204   19312 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0729 16:57:07.521101   19312 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 16:57:07.521129   19312 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 16:57:07.523436   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 16:57:07.525728   19312 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0729 16:57:07.525746   19312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0729 16:57:07.562242   19312 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 16:57:07.562263   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0729 16:57:07.564214   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 16:57:07.565547   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 16:57:07.576715   19312 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0729 16:57:07.576741   19312 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0729 16:57:07.603695   19312 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 16:57:07.603717   19312 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0729 16:57:07.620312   19312 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0729 16:57:07.620334   19312 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0729 16:57:07.633851   19312 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0729 16:57:07.633877   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0729 16:57:07.645011   19312 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0729 16:57:07.645044   19312 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0729 16:57:07.719191   19312 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0729 16:57:07.719215   19312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0729 16:57:07.729253   19312 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 16:57:07.729272   19312 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 16:57:07.734429   19312 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0729 16:57:07.734446   19312 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0729 16:57:07.758295   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0729 16:57:07.785630   19312 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0729 16:57:07.785652   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0729 16:57:07.815154   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 16:57:07.828403   19312 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0729 16:57:07.828437   19312 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0729 16:57:07.878577   19312 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0729 16:57:07.878606   19312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0729 16:57:07.879248   19312 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0729 16:57:07.879269   19312 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0729 16:57:07.924213   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0729 16:57:08.013979   19312 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 16:57:08.014005   19312 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 16:57:08.048829   19312 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0729 16:57:08.048849   19312 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0729 16:57:08.073430   19312 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0729 16:57:08.073457   19312 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0729 16:57:08.132628   19312 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0729 16:57:08.132651   19312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0729 16:57:08.263468   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 16:57:08.273098   19312 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0729 16:57:08.273127   19312 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0729 16:57:08.299081   19312 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 16:57:08.299101   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0729 16:57:08.481849   19312 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0729 16:57:08.481873   19312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0729 16:57:08.485222   19312 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 16:57:08.485239   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0729 16:57:08.704163   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 16:57:08.748023   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 16:57:08.968985   19312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0729 16:57:08.969007   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0729 16:57:09.108510   19312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0729 16:57:09.108541   19312 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0729 16:57:09.302347   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.845959947s)
	I0729 16:57:09.302403   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:09.302413   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:09.302758   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:09.302806   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:09.302815   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:09.302830   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:09.302840   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:09.303108   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:09.303130   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:09.303153   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:09.417796   19312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0729 16:57:09.417823   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0729 16:57:09.755920   19312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0729 16:57:09.755943   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0729 16:57:10.114626   19312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 16:57:10.114653   19312 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0729 16:57:10.273089   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.787020888s)
	I0729 16:57:10.273142   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:10.273152   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:10.273406   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:10.273465   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:10.273483   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:10.273499   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:10.273511   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:10.273765   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:10.274296   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:10.274317   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:10.463534   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 16:57:11.019376   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.519329979s)
	I0729 16:57:11.019420   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:11.019417   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.517537534s)
	I0729 16:57:11.019431   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:11.019441   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:11.019451   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:11.019471   19312 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.498312306s)
	I0729 16:57:11.019501   19312 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.498380294s)
	I0729 16:57:11.019500   19312 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0729 16:57:11.019799   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:11.019817   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:11.019826   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:11.019835   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:11.019883   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:11.019921   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:11.019941   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:11.019957   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:11.020366   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:11.020384   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:11.020568   19312 node_ready.go:35] waiting up to 6m0s for node "addons-433102" to be "Ready" ...
	I0729 16:57:11.020633   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:11.020657   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:11.020666   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:11.052350   19312 node_ready.go:49] node "addons-433102" has status "Ready":"True"
	I0729 16:57:11.052370   19312 node_ready.go:38] duration metric: took 31.783838ms for node "addons-433102" to be "Ready" ...
	I0729 16:57:11.052378   19312 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 16:57:11.107413   19312 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5kgv7" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:11.135533   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:11.135560   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:11.135845   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:11.135864   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:11.627560   19312 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-433102" context rescaled to 1 replicas
	I0729 16:57:12.492905   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.969439321s)
	I0729 16:57:12.492968   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:12.492980   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:12.492990   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.928741596s)
	I0729 16:57:12.493027   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:12.493043   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:12.493245   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:12.493348   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:12.493364   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:12.493373   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:12.493380   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:12.493353   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:12.493311   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:12.493287   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:12.493410   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:12.493499   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:12.493581   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:12.493594   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:12.493802   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:12.493816   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:12.591070   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:12.591090   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:12.591361   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:12.591379   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:13.130288   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-5kgv7" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:13.988798   19312 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0729 16:57:13.988835   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:13.992125   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:13.992589   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:13.992614   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:13.992785   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:13.992990   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:13.993155   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:13.993298   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:14.384748   19312 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0729 16:57:14.447148   19312 addons.go:234] Setting addon gcp-auth=true in "addons-433102"
	I0729 16:57:14.447204   19312 host.go:66] Checking if "addons-433102" exists ...
	I0729 16:57:14.447526   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:14.447553   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:14.463567   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37911
	I0729 16:57:14.463985   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:14.464504   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:14.464525   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:14.464855   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:14.465475   19312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 16:57:14.465507   19312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 16:57:14.481099   19312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44859
	I0729 16:57:14.481616   19312 main.go:141] libmachine: () Calling .GetVersion
	I0729 16:57:14.482085   19312 main.go:141] libmachine: Using API Version  1
	I0729 16:57:14.482106   19312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 16:57:14.482514   19312 main.go:141] libmachine: () Calling .GetMachineName
	I0729 16:57:14.482695   19312 main.go:141] libmachine: (addons-433102) Calling .GetState
	I0729 16:57:14.484438   19312 main.go:141] libmachine: (addons-433102) Calling .DriverName
	I0729 16:57:14.484643   19312 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0729 16:57:14.484666   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHHostname
	I0729 16:57:14.487307   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:14.487739   19312 main.go:141] libmachine: (addons-433102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:3f:00", ip: ""} in network mk-addons-433102: {Iface:virbr1 ExpiryTime:2024-07-29 17:56:27 +0000 UTC Type:0 Mac:52:54:00:d8:3f:00 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:addons-433102 Clientid:01:52:54:00:d8:3f:00}
	I0729 16:57:14.487763   19312 main.go:141] libmachine: (addons-433102) DBG | domain addons-433102 has defined IP address 192.168.39.73 and MAC address 52:54:00:d8:3f:00 in network mk-addons-433102
	I0729 16:57:14.487869   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHPort
	I0729 16:57:14.488007   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHKeyPath
	I0729 16:57:14.488174   19312 main.go:141] libmachine: (addons-433102) Calling .GetSSHUsername
	I0729 16:57:14.488316   19312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/addons-433102/id_rsa Username:docker}
	I0729 16:57:15.393477   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.827898983s)
	I0729 16:57:15.393524   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:15.393533   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:15.393553   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.63522486s)
	I0729 16:57:15.393594   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:15.393605   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.578418649s)
	I0729 16:57:15.393638   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:15.393654   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:15.393610   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:15.393699   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.469454514s)
	I0729 16:57:15.393724   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:15.393733   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:15.393775   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.130278515s)
	I0729 16:57:15.393799   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:15.393811   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:15.393926   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:15.393950   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:15.393958   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:15.393962   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:15.393985   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:15.393994   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:15.394002   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:15.394008   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:15.394058   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:15.394082   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:15.394088   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:15.394098   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:15.394106   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:15.394165   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:15.394188   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:15.394195   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:15.394202   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:15.394208   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:15.394241   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:15.394258   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:15.394277   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:15.394285   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:15.394294   19312 addons.go:475] Verifying addon ingress=true in "addons-433102"
	I0729 16:57:15.394647   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.69044496s)
	W0729 16:57:15.394690   19312 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 16:57:15.394717   19312 retry.go:31] will retry after 199.459612ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 16:57:15.394801   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.646748098s)
	I0729 16:57:15.394818   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:15.394863   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:15.394923   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:15.394950   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:15.394956   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:15.395153   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:15.395166   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:15.395193   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:15.395201   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:15.395728   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:15.395761   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:15.395768   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:15.395775   19312 addons.go:475] Verifying addon metrics-server=true in "addons-433102"
	I0729 16:57:15.395932   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:15.395967   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:15.395980   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:15.393965   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:15.396179   19312 out.go:177] * Verifying ingress addon...
	I0729 16:57:15.396208   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:15.396236   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:15.396243   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:15.396250   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:15.396257   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:15.396842   19312 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-433102 service yakd-dashboard -n yakd-dashboard
	
	I0729 16:57:15.397074   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:15.398335   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:15.397103   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:15.397165   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:15.398438   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:15.398447   19312 addons.go:475] Verifying addon registry=true in "addons-433102"
	I0729 16:57:15.398484   19312 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0729 16:57:15.397496   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:15.399433   19312 out.go:177] * Verifying registry addon...
	I0729 16:57:15.401193   19312 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0729 16:57:15.423712   19312 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0729 16:57:15.423732   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:15.423896   19312 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0729 16:57:15.423914   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:15.594578   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 16:57:15.613425   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-5kgv7" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:15.903568   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:15.908862   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:16.417671   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:16.431413   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:16.602384   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.138759617s)
	I0729 16:57:16.602400   19312 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.11774003s)
	I0729 16:57:16.602435   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:16.602447   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:16.602747   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:16.602785   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:16.602808   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:16.602810   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:16.602899   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:16.603133   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:16.603179   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:16.603187   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:16.603196   19312 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-433102"
	I0729 16:57:16.604315   19312 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0729 16:57:16.604381   19312 out.go:177] * Verifying csi-hostpath-driver addon...
	I0729 16:57:16.606034   19312 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 16:57:16.606951   19312 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0729 16:57:16.609075   19312 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0729 16:57:16.609098   19312 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0729 16:57:16.646493   19312 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0729 16:57:16.646515   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:16.720570   19312 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0729 16:57:16.720599   19312 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0729 16:57:16.774222   19312 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 16:57:16.774243   19312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0729 16:57:16.810790   19312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 16:57:16.903307   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:16.907714   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:17.115407   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:17.403841   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:17.408300   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:17.614642   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:17.621216   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-5kgv7" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:17.784909   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.190283262s)
	I0729 16:57:17.784963   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:17.784985   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:17.785362   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:17.785401   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:17.785412   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:17.785423   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:17.785447   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:17.785687   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:17.785740   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:17.785752   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:17.903881   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:17.926744   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:18.171326   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:18.266967   19312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.456140834s)
	I0729 16:57:18.267029   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:18.267043   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:18.267406   19312 main.go:141] libmachine: (addons-433102) DBG | Closing plugin on server side
	I0729 16:57:18.267415   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:18.267432   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:18.267444   19312 main.go:141] libmachine: Making call to close driver server
	I0729 16:57:18.267458   19312 main.go:141] libmachine: (addons-433102) Calling .Close
	I0729 16:57:18.267682   19312 main.go:141] libmachine: Successfully made call to close driver server
	I0729 16:57:18.267739   19312 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 16:57:18.269844   19312 addons.go:475] Verifying addon gcp-auth=true in "addons-433102"
	I0729 16:57:18.271303   19312 out.go:177] * Verifying gcp-auth addon...
	I0729 16:57:18.273680   19312 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0729 16:57:18.325271   19312 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0729 16:57:18.325290   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:18.405046   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:18.422894   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:18.614051   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:18.778293   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:18.903612   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:18.908258   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:19.193513   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:19.277474   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:19.408526   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:19.416265   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:19.613839   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:19.779219   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:19.904099   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:19.907010   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:20.123984   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-5kgv7" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:20.125246   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:20.278537   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:20.403683   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:20.408831   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:20.612898   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:20.777843   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:20.902537   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:20.905192   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:21.113001   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:21.277864   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:21.402610   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:21.404960   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:21.612680   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:21.777088   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:21.902807   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:21.906852   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:22.114122   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:22.283120   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:22.403371   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:22.405809   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:22.695950   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:22.701455   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-5kgv7" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:22.779121   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:22.903390   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:22.905223   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:23.115858   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:23.278140   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:23.403990   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:23.407018   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:23.613953   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:23.614565   19312 pod_ready.go:97] pod "coredns-7db6d8ff4d-5kgv7" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 16:57:23 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 16:57:07 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 16:57:07 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 16:57:07 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 16:57:07 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.73 HostIPs:[{IP:192.168.39.73}] PodIP: PodIPs:[] StartTime:2024-07-29 16:57:07 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-29 16:57:12 +0000 UTC,FinishedAt:2024-07-29 16:57:22 +0000 UTC,ContainerID:cri-o://47e73e792b774d9238a1fb14fafa9aebc4040430d69afa682d4e31b1270ec754,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://47e73e792b774d9238a1fb14fafa9aebc4040430d69afa682d4e31b1270ec754 Started:0xc001bafb00 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0729 16:57:23.614590   19312 pod_ready.go:81] duration metric: took 12.507153955s for pod "coredns-7db6d8ff4d-5kgv7" in "kube-system" namespace to be "Ready" ...
	E0729 16:57:23.614604   19312 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-5kgv7" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 16:57:23 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 16:57:07 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 16:57:07 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 16:57:07 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 16:57:07 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.73 HostIPs:[{IP:192.168.39.73}] PodIP: PodIPs:[] StartTime:2024-07-29 16:57:07 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-29 16:57:12 +0000 UTC,FinishedAt:2024-07-29 16:57:22 +0000 UTC,ContainerID:cri-o://47e73e792b774d9238a1fb14fafa9aebc4040430d69afa682d4e31b1270ec754,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://47e73e792b774d9238a1fb14fafa9aebc4040430d69afa682d4e31b1270ec754 Started:0xc001bafb00 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0729 16:57:23.614613   19312 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace to be "Ready" ...
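The coredns-7db6d8ff4d-5kgv7 replica above is skipped because it terminated with phase "Succeeded" (most likely the extra CoreDNS replica being removed during setup), so the wait switches to coredns-7db6d8ff4d-chxlc. Had the old pod still existed, the same phase could be read back with an illustrative one-liner:

	kubectl --context addons-433102 -n kube-system get pod coredns-7db6d8ff4d-5kgv7 -o jsonpath='{.status.phase}'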
	I0729 16:57:23.777914   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:23.902844   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:23.905621   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:24.224642   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:24.277575   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:24.405796   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:24.405825   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:24.613195   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:24.777773   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:24.902714   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:24.905362   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:25.112474   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:25.279368   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:25.403302   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:25.406213   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:25.613035   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:25.619980   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:25.777065   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:25.902884   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:25.905350   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:26.112358   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:26.277144   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:26.402718   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:26.405557   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:26.612239   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:26.777061   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:26.902679   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:26.905375   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:27.686324   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:27.686774   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:27.690140   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:27.690563   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:27.691125   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:27.696218   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:27.777885   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:27.902515   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:27.905330   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:28.112117   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:28.277451   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:28.403392   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:28.406209   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:28.611440   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:28.779451   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:28.903222   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:28.905586   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:29.112492   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:29.277476   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:29.403167   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:29.408534   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:29.612054   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:29.777447   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:29.904379   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:29.906122   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:30.113510   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:30.119838   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:30.277045   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:30.403451   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:30.406227   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:30.614949   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:30.857320   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:30.903696   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:30.911626   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:31.113630   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:31.279127   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:31.402639   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:31.407704   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:31.614027   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:31.777082   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:31.909121   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:31.911681   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:32.117666   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:32.128537   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:32.278196   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:32.403892   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:32.406590   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:32.612910   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:32.777283   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:32.903110   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:32.905490   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:33.111764   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:33.277343   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:33.404097   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:33.406393   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:33.613046   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:33.776967   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:33.902616   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:33.905602   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:34.112779   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:34.277746   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:34.403568   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:34.406667   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:34.612680   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:34.620199   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:34.777553   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:34.985527   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:34.985970   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:35.113845   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:35.277318   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:35.405007   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:35.413462   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:35.612797   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:35.777283   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:35.904117   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:35.907818   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:36.112389   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:36.277717   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:36.420787   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:36.421927   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:36.613000   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:36.623153   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:36.777136   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:36.903635   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:36.910708   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:37.113089   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:37.277142   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:37.402841   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:37.405547   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:37.612151   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:37.782575   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:37.905288   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:37.908431   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:38.167872   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:38.381328   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:38.403328   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:38.411816   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:38.613236   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:38.777206   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:38.903188   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:38.906712   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:39.112883   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:39.120726   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:39.277758   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:39.402942   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:39.405737   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:39.612767   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:39.777788   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:39.904433   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:39.909108   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:40.113148   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:40.277793   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:40.402504   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:40.405276   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:40.611795   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:40.777735   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:40.902618   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:40.905334   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:41.119939   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:41.123696   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:41.278414   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:41.403233   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:41.406557   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:41.614897   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:41.784850   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:41.902500   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:41.911161   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:42.122719   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:42.283705   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:42.405069   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:42.409012   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:42.618797   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:42.777876   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:42.906548   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:42.908532   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:43.113780   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:43.130544   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:43.278006   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:43.402887   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:43.412910   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:43.612565   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:43.777555   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:43.903275   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:43.905651   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:44.113731   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:44.278675   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:44.403606   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:44.405716   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:44.613168   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:44.777760   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:44.902801   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:44.907920   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:45.113060   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:45.278677   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:45.402719   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:45.405187   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:45.613123   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:45.620842   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:45.778692   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:45.903122   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:45.906138   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:46.113059   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:46.277495   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:46.405608   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:46.405667   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:46.612562   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:46.777841   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:46.903480   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:46.907870   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 16:57:47.113268   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:47.277534   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:47.407963   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:47.416666   19312 kapi.go:107] duration metric: took 32.015468158s to wait for kubernetes.io/minikube-addons=registry ...
	I0729 16:57:47.796433   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:47.798297   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:47.799488   19312 pod_ready.go:102] pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace has status "Ready":"False"
	I0729 16:57:47.903213   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:48.113820   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:48.277097   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:48.402456   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:48.612623   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:48.619322   19312 pod_ready.go:92] pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace has status "Ready":"True"
	I0729 16:57:48.619345   19312 pod_ready.go:81] duration metric: took 25.004722524s for pod "coredns-7db6d8ff4d-chxlc" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:48.619356   19312 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-433102" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:48.623647   19312 pod_ready.go:92] pod "etcd-addons-433102" in "kube-system" namespace has status "Ready":"True"
	I0729 16:57:48.623667   19312 pod_ready.go:81] duration metric: took 4.304122ms for pod "etcd-addons-433102" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:48.623677   19312 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-433102" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:48.627978   19312 pod_ready.go:92] pod "kube-apiserver-addons-433102" in "kube-system" namespace has status "Ready":"True"
	I0729 16:57:48.627994   19312 pod_ready.go:81] duration metric: took 4.309385ms for pod "kube-apiserver-addons-433102" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:48.628004   19312 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-433102" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:48.632988   19312 pod_ready.go:92] pod "kube-controller-manager-addons-433102" in "kube-system" namespace has status "Ready":"True"
	I0729 16:57:48.633006   19312 pod_ready.go:81] duration metric: took 4.994019ms for pod "kube-controller-manager-addons-433102" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:48.633018   19312 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6wcxr" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:48.638373   19312 pod_ready.go:92] pod "kube-proxy-6wcxr" in "kube-system" namespace has status "Ready":"True"
	I0729 16:57:48.638392   19312 pod_ready.go:81] duration metric: took 5.367654ms for pod "kube-proxy-6wcxr" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:48.638403   19312 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-433102" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:48.777331   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:48.905683   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:49.019228   19312 pod_ready.go:92] pod "kube-scheduler-addons-433102" in "kube-system" namespace has status "Ready":"True"
	I0729 16:57:49.019256   19312 pod_ready.go:81] duration metric: took 380.843864ms for pod "kube-scheduler-addons-433102" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:49.019270   19312 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-w9bhg" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:49.113500   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:49.376931   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:49.402071   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:49.417970   19312 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-w9bhg" in "kube-system" namespace has status "Ready":"True"
	I0729 16:57:49.417991   19312 pod_ready.go:81] duration metric: took 398.711328ms for pod "nvidia-device-plugin-daemonset-w9bhg" in "kube-system" namespace to be "Ready" ...
	I0729 16:57:49.418008   19312 pod_ready.go:38] duration metric: took 38.365617846s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
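The label and component selectors listed above can be re-checked by hand with a loop of kubectl wait calls; this is only a sketch that assumes the same context and mirrors the 6m per-pod budget from the log:

	for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  kubectl --context addons-433102 -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m
	done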
	I0729 16:57:49.418025   19312 api_server.go:52] waiting for apiserver process to appear ...
	I0729 16:57:49.418076   19312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:57:49.451649   19312 api_server.go:72] duration metric: took 42.640369496s to wait for apiserver process to appear ...
	I0729 16:57:49.451671   19312 api_server.go:88] waiting for apiserver healthz status ...
	I0729 16:57:49.451689   19312 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8443/healthz ...
	I0729 16:57:49.455914   19312 api_server.go:279] https://192.168.39.73:8443/healthz returned 200:
	ok
	I0729 16:57:49.457137   19312 api_server.go:141] control plane version: v1.30.3
	I0729 16:57:49.457160   19312 api_server.go:131] duration metric: took 5.483086ms to wait for apiserver health ...
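The health probe above is a process check inside the VM followed by an HTTPS GET against /healthz. An illustrative manual equivalent, run from the test workspace (-k skips certificate verification since the apiserver CA is not in the host trust store; /healthz should answer "ok" anonymously on a default configuration):

	out/minikube-linux-amd64 -p addons-433102 ssh "sudo pgrep -xnf kube-apiserver.*minikube.*"
	curl -sk https://192.168.39.73:8443/healthz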
	I0729 16:57:49.457168   19312 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 16:57:49.611937   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:49.624995   19312 system_pods.go:59] 18 kube-system pods found
	I0729 16:57:49.625019   19312 system_pods.go:61] "coredns-7db6d8ff4d-chxlc" [13483151-7a93-4b7e-bc8a-a0df4c049a67] Running
	I0729 16:57:49.625026   19312 system_pods.go:61] "csi-hostpath-attacher-0" [2c1c2c8c-4978-4a46-9e3b-dd66cdeeb31d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0729 16:57:49.625032   19312 system_pods.go:61] "csi-hostpath-resizer-0" [70844275-2cb5-4ef3-81cb-5e638a9d1107] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0729 16:57:49.625040   19312 system_pods.go:61] "csi-hostpathplugin-v9jld" [c81085b2-ef2e-48d1-b265-1becf684440b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0729 16:57:49.625044   19312 system_pods.go:61] "etcd-addons-433102" [06021977-6eba-44af-9f49-543aa605fdcd] Running
	I0729 16:57:49.625048   19312 system_pods.go:61] "kube-apiserver-addons-433102" [a737c877-f452-4e11-8665-567d05e884a3] Running
	I0729 16:57:49.625051   19312 system_pods.go:61] "kube-controller-manager-addons-433102" [7551355b-d9b5-4d57-b372-afbaadbd14fc] Running
	I0729 16:57:49.625054   19312 system_pods.go:61] "kube-ingress-dns-minikube" [e7277800-f99a-44f9-8098-4c1bf978bf95] Running
	I0729 16:57:49.625057   19312 system_pods.go:61] "kube-proxy-6wcxr" [508ba4dd-e6d5-438e-a66c-0188b555f367] Running
	I0729 16:57:49.625060   19312 system_pods.go:61] "kube-scheduler-addons-433102" [617259cb-04ad-4c62-99e8-b71aeb4ef8c3] Running
	I0729 16:57:49.625064   19312 system_pods.go:61] "metrics-server-c59844bb4-fdwdm" [377d84f1-430a-423a-8e08-3ffc0e083b56] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 16:57:49.625067   19312 system_pods.go:61] "nvidia-device-plugin-daemonset-w9bhg" [56c0414f-7d09-4189-9d58-7fc65a0d5eb8] Running
	I0729 16:57:49.625070   19312 system_pods.go:61] "registry-656c9c8d9c-bz6n2" [61225496-6f2a-48fa-b4f8-eab75fc915ba] Running
	I0729 16:57:49.625073   19312 system_pods.go:61] "registry-proxy-wnpcd" [5728a955-abcb-481c-8e81-300240983718] Running
	I0729 16:57:49.625077   19312 system_pods.go:61] "snapshot-controller-745499f584-9x5dq" [35e5ddb5-9e5a-4719-9b39-28d96d5b035a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 16:57:49.625082   19312 system_pods.go:61] "snapshot-controller-745499f584-hkqrc" [2efba456-3d43-4aa8-8262-f2a98c962296] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 16:57:49.625087   19312 system_pods.go:61] "storage-provisioner" [bb738aeb-40ec-47f1-9422-8c2a64cb1b38] Running
	I0729 16:57:49.625090   19312 system_pods.go:61] "tiller-deploy-6677d64bcd-dvkm9" [8c867f82-b890-4ac8-aa2d-74386a1f3bdb] Running
	I0729 16:57:49.625094   19312 system_pods.go:74] duration metric: took 167.922433ms to wait for pod list to return data ...
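The same 18-pod snapshot can be reproduced with a plain listing (illustrative):

	kubectl --context addons-433102 -n kube-system get pods -o wide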
	I0729 16:57:49.625100   19312 default_sa.go:34] waiting for default service account to be created ...
	I0729 16:57:49.776690   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:49.818000   19312 default_sa.go:45] found service account: "default"
	I0729 16:57:49.818021   19312 default_sa.go:55] duration metric: took 192.915569ms for default service account to be created ...
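The default service account check corresponds roughly to:

	kubectl --context addons-433102 -n default get serviceaccount default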
	I0729 16:57:49.818028   19312 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 16:57:49.902832   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:50.029280   19312 system_pods.go:86] 18 kube-system pods found
	I0729 16:57:50.029311   19312 system_pods.go:89] "coredns-7db6d8ff4d-chxlc" [13483151-7a93-4b7e-bc8a-a0df4c049a67] Running
	I0729 16:57:50.029324   19312 system_pods.go:89] "csi-hostpath-attacher-0" [2c1c2c8c-4978-4a46-9e3b-dd66cdeeb31d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0729 16:57:50.029335   19312 system_pods.go:89] "csi-hostpath-resizer-0" [70844275-2cb5-4ef3-81cb-5e638a9d1107] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0729 16:57:50.029346   19312 system_pods.go:89] "csi-hostpathplugin-v9jld" [c81085b2-ef2e-48d1-b265-1becf684440b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0729 16:57:50.029354   19312 system_pods.go:89] "etcd-addons-433102" [06021977-6eba-44af-9f49-543aa605fdcd] Running
	I0729 16:57:50.029365   19312 system_pods.go:89] "kube-apiserver-addons-433102" [a737c877-f452-4e11-8665-567d05e884a3] Running
	I0729 16:57:50.029371   19312 system_pods.go:89] "kube-controller-manager-addons-433102" [7551355b-d9b5-4d57-b372-afbaadbd14fc] Running
	I0729 16:57:50.029382   19312 system_pods.go:89] "kube-ingress-dns-minikube" [e7277800-f99a-44f9-8098-4c1bf978bf95] Running
	I0729 16:57:50.029387   19312 system_pods.go:89] "kube-proxy-6wcxr" [508ba4dd-e6d5-438e-a66c-0188b555f367] Running
	I0729 16:57:50.029393   19312 system_pods.go:89] "kube-scheduler-addons-433102" [617259cb-04ad-4c62-99e8-b71aeb4ef8c3] Running
	I0729 16:57:50.029406   19312 system_pods.go:89] "metrics-server-c59844bb4-fdwdm" [377d84f1-430a-423a-8e08-3ffc0e083b56] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 16:57:50.029416   19312 system_pods.go:89] "nvidia-device-plugin-daemonset-w9bhg" [56c0414f-7d09-4189-9d58-7fc65a0d5eb8] Running
	I0729 16:57:50.029428   19312 system_pods.go:89] "registry-656c9c8d9c-bz6n2" [61225496-6f2a-48fa-b4f8-eab75fc915ba] Running
	I0729 16:57:50.029435   19312 system_pods.go:89] "registry-proxy-wnpcd" [5728a955-abcb-481c-8e81-300240983718] Running
	I0729 16:57:50.029446   19312 system_pods.go:89] "snapshot-controller-745499f584-9x5dq" [35e5ddb5-9e5a-4719-9b39-28d96d5b035a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 16:57:50.029458   19312 system_pods.go:89] "snapshot-controller-745499f584-hkqrc" [2efba456-3d43-4aa8-8262-f2a98c962296] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 16:57:50.029471   19312 system_pods.go:89] "storage-provisioner" [bb738aeb-40ec-47f1-9422-8c2a64cb1b38] Running
	I0729 16:57:50.029481   19312 system_pods.go:89] "tiller-deploy-6677d64bcd-dvkm9" [8c867f82-b890-4ac8-aa2d-74386a1f3bdb] Running
	I0729 16:57:50.029491   19312 system_pods.go:126] duration metric: took 211.456472ms to wait for k8s-apps to be running ...
	I0729 16:57:50.029501   19312 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 16:57:50.029545   19312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 16:57:50.066855   19312 system_svc.go:56] duration metric: took 37.344862ms WaitForService to wait for kubelet
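The kubelet check above is an in-VM systemctl query; a simplified manual equivalent (illustrative) is:

	out/minikube-linux-amd64 -p addons-433102 ssh "sudo systemctl is-active kubelet"

which should print "active" while the service is running.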
	I0729 16:57:50.066890   19312 kubeadm.go:582] duration metric: took 43.255612143s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:57:50.066924   19312 node_conditions.go:102] verifying NodePressure condition ...
	I0729 16:57:50.113137   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:50.220565   19312 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 16:57:50.220600   19312 node_conditions.go:123] node cpu capacity is 2
	I0729 16:57:50.220616   19312 node_conditions.go:105] duration metric: took 153.68561ms to run NodePressure ...
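The capacity figures above (17734596Ki ephemeral storage, 2 CPUs) are read from the node status and can be confirmed with an illustrative query:

	kubectl --context addons-433102 get node addons-433102 -o jsonpath='{.status.capacity}'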
	I0729 16:57:50.220632   19312 start.go:241] waiting for startup goroutines ...
	I0729 16:57:50.277341   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:50.404091   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:50.618337   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:50.778404   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:50.903526   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:51.113705   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:51.277681   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:51.403284   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:51.613472   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:51.777435   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:51.903174   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:52.113546   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:52.277321   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:52.407831   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:52.612931   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:52.777040   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:52.902990   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:53.115382   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:53.277242   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:53.402951   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:53.612944   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:53.777708   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:53.911597   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:54.113632   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:54.277715   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:54.403341   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:54.612599   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:54.777333   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:54.922509   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:55.113042   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:55.276932   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:55.403023   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:55.613653   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:55.778091   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:55.903112   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:56.116994   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:56.277507   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:56.403772   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:56.612287   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:56.778207   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:56.903411   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:57.114557   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:57.281637   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:57.403018   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:57.614772   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:57.777964   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:57.902724   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:58.113413   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:58.277957   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:58.403072   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:58.612596   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:58.777511   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:58.903483   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:59.125122   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:59.278399   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:59.403341   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:57:59.612740   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:57:59.777024   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:57:59.902675   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:00.111543   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:00.277170   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:00.402625   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:00.611969   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:00.780555   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:00.904014   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:01.113352   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:01.279201   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:01.402753   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:01.612615   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:01.777524   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:01.903226   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:02.116390   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:02.564907   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:02.741072   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:02.741396   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:02.777350   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:02.903682   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:03.112919   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:03.283207   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:03.402190   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:03.615514   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:03.777326   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:03.905402   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:04.112814   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:04.278043   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:04.409764   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:04.612755   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:04.783699   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:04.902226   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:05.120576   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:05.277637   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:05.412297   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:05.643919   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:05.777089   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:05.903430   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:06.117111   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:06.277410   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:06.402662   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:06.621146   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:06.780092   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:06.904568   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:07.113183   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:07.277276   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:07.403679   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:07.612526   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:07.780582   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:07.903396   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:08.116585   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:08.283785   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:08.404554   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:08.613383   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:08.778535   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:08.910205   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:09.114636   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:09.277358   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:09.403280   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:09.612948   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:09.777293   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:09.903206   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:10.113470   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:10.278433   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:10.403658   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:10.612369   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:10.777416   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:10.904894   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:11.115286   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:11.277859   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:11.403459   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:11.612829   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:11.777687   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:11.903521   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:12.112192   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:12.278162   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:12.405075   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:12.612372   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 16:58:12.779487   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:12.903135   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:13.114043   19312 kapi.go:107] duration metric: took 56.507089151s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0729 16:58:13.277163   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:13.402972   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:13.777091   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:13.904813   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:14.277454   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:14.403230   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:14.777283   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:14.903661   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:15.277560   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:15.402902   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:15.777544   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:15.903483   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:16.277174   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:16.403371   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:16.776743   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:16.902900   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:17.277772   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:17.402189   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:17.781333   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:17.903250   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:18.277295   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:18.403332   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:18.777227   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:18.903916   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:19.280730   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:19.402847   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:19.777772   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:19.902599   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:20.277347   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:20.403808   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:20.778243   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:20.903474   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:21.276942   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:21.405117   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:21.777924   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:21.903401   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:22.277721   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:22.402721   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:22.825116   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:22.903106   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:23.277059   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:23.402689   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:23.777830   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:23.902687   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:24.277892   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:24.402368   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:24.781025   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:24.902844   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:25.277628   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:25.403544   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:25.777472   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:25.903522   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:26.277469   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:26.403361   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:26.777150   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:26.903707   19312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 16:58:27.277984   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:27.402771   19312 kapi.go:107] duration metric: took 1m12.004283711s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0729 16:58:27.777320   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:28.281297   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:28.778156   19312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 16:58:29.278104   19312 kapi.go:107] duration metric: took 1m11.004421424s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0729 16:58:29.279836   19312 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-433102 cluster.
	I0729 16:58:29.281160   19312 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0729 16:58:29.282330   19312 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0729 16:58:29.283554   19312 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, default-storageclass, storage-provisioner, storage-provisioner-rancher, metrics-server, helm-tiller, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0729 16:58:29.284860   19312 addons.go:510] duration metric: took 1m22.473543206s for enable addons: enabled=[nvidia-device-plugin cloud-spanner ingress-dns default-storageclass storage-provisioner storage-provisioner-rancher metrics-server helm-tiller inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0729 16:58:29.284901   19312 start.go:246] waiting for cluster config update ...
	I0729 16:58:29.284918   19312 start.go:255] writing updated cluster config ...
	I0729 16:58:29.285143   19312 ssh_runner.go:195] Run: rm -f paused
	I0729 16:58:29.332555   19312 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 16:58:29.334470   19312 out.go:177] * Done! kubectl is now configured to use "addons-433102" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 29 17:04:58 addons-433102 crio[683]: time="2024-07-29 17:04:58.686523690Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3570749d9d79d684051602d352dba7823388fd99b5f9878334cbc6d894862f43,Metadata:&PodSandboxMetadata{Name:hello-world-app-6778b5fc9f-cz2bv,Uid:773dada2-8958-460d-b4f8-53d9981e74ab,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722272518464725827,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-cz2bv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 773dada2-8958-460d-b4f8-53d9981e74ab,pod-template-hash: 6778b5fc9f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T17:01:58.152684460Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:14569a4d3adaa424921ec1367e09008c4a61d82b4d4420f28aa901a13496fca1,Metadata:&PodSandboxMetadata{Name:nginx,Uid:bba16d61-afc5-4c02-85a7-8e1181099d91,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1722272378602026607,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bba16d61-afc5-4c02-85a7-8e1181099d91,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T16:59:38.292394508Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ad0779037f7870e83416e7b3d9c156a46b69c43805d64302b606f8dd75df6fa3,Metadata:&PodSandboxMetadata{Name:busybox,Uid:e3ae3c83-a5c9-4ac7-8e5e-89b7df19295c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722272309911928595,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e3ae3c83-a5c9-4ac7-8e5e-89b7df19295c,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T16:58:29.597702332Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:788563bf38e22d42ff
70445a3e7dc5ca86356221a9c50ccbe097c161a570db36,Metadata:&PodSandboxMetadata{Name:metrics-server-c59844bb4-fdwdm,Uid:377d84f1-430a-423a-8e08-3ffc0e083b56,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722272232932480612,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-c59844bb4-fdwdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377d84f1-430a-423a-8e08-3ffc0e083b56,k8s-app: metrics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T16:57:12.317450539Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3f828acf7d097497af16440cc6cd07ae40d2bc608a845a408412fdda28abb0c7,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:bb738aeb-40ec-47f1-9422-8c2a64cb1b38,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722272232796315249,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernet
es.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb738aeb-40ec-47f1-9422-8c2a64cb1b38,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T16:57:12.474994652Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&
PodSandbox{Id:8fbf004632d3aa84babf678ad0864bf270121c283fcfba5f4aa1a1c73f779ca9,Metadata:&PodSandboxMetadata{Name:kube-proxy-6wcxr,Uid:508ba4dd-e6d5-438e-a66c-0188b555f367,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722272228225681580,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6wcxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 508ba4dd-e6d5-438e-a66c-0188b555f367,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T16:57:07.013582772Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a7d20bcb6427eaaa99e2a808005690bbce7f36eb60d7bb3b5b7689203d9e1cc0,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-chxlc,Uid:13483151-7a93-4b7e-bc8a-a0df4c049a67,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722272228187040518,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.
name: coredns-7db6d8ff4d-chxlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13483151-7a93-4b7e-bc8a-a0df4c049a67,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T16:57:07.267248521Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bf49bf99b816bf5d51496123adba38127de234b880313dc7cbb09e625c7b0906,Metadata:&PodSandboxMetadata{Name:etcd-addons-433102,Uid:5b994742abe79d480c8f0ba290e51e7e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722272207912326349,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b994742abe79d480c8f0ba290e51e7e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.73:2379,kubernetes.io/config.hash: 5b994742abe79d480c8f0ba290e51e7e,kubernetes.io/config.seen: 2024-07-29T16:56
:47.438491262Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1a0140bf9d63825f6e047dd86d15a682e5e28597567507432eedafaa1e785527,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-433102,Uid:2280a372007a9e99150f8ed8e7385ac9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722272207903363745,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2280a372007a9e99150f8ed8e7385ac9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2280a372007a9e99150f8ed8e7385ac9,kubernetes.io/config.seen: 2024-07-29T16:56:47.438490382Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a22c8ef147418ae3b7b41984990582f394249928eb39a2cb517dbe663f43fbb2,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-433102,Uid:e6b3873e1eb7772d5b00a12b153cb28c,Namespace:kube-system,Attempt:0,},State:SAN
DBOX_READY,CreatedAt:1722272207898799156,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6b3873e1eb7772d5b00a12b153cb28c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e6b3873e1eb7772d5b00a12b153cb28c,kubernetes.io/config.seen: 2024-07-29T16:56:47.438489513Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c7bea62da2e92e649f0efd165cccb1c468a357414fbcbbb361e1db55f1f4bdcf,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-433102,Uid:58abe85a975931c71d2ced52e3a7744c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722272207883743559,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58abe85a975931c71d2ced52e3a7744c,tier: c
ontrol-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.73:8443,kubernetes.io/config.hash: 58abe85a975931c71d2ced52e3a7744c,kubernetes.io/config.seen: 2024-07-29T16:56:47.438486461Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=72d87ec2-fc8a-4a9e-8664-b7a300f6b442 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 17:04:58 addons-433102 crio[683]: time="2024-07-29 17:04:58.687201142Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a1d48d7-ae5d-446b-b7c3-f2a281fb2607 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:04:58 addons-433102 crio[683]: time="2024-07-29 17:04:58.687269285Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a1d48d7-ae5d-446b-b7c3-f2a281fb2607 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:04:58 addons-433102 crio[683]: time="2024-07-29 17:04:58.687508545Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f40ae879569d775a79f117bd872c259ff93223ced84a0e688802eab6411d8c7,PodSandboxId:3570749d9d79d684051602d352dba7823388fd99b5f9878334cbc6d894862f43,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722272519487084714,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-cz2bv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 773dada2-8958-460d-b4f8-53d9981e74ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2be7a527,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552417b813e20a7ce35e6fcea96e06ad05be03de1a0bd835a9bf6528b1b97ed0,PodSandboxId:14569a4d3adaa424921ec1367e09008c4a61d82b4d4420f28aa901a13496fca1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722272380866463815,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bba16d61-afc5-4c02-85a7-8e1181099d91,},Annotations:map[string]string{io.kubernet
es.container.hash: da8d9711,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:168151c1371e9807e41e98e386b805a9c081e1884f375f772acd59461fd1e4e1,PodSandboxId:ad0779037f7870e83416e7b3d9c156a46b69c43805d64302b606f8dd75df6fa3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722272311011477451,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e3ae3c83-a5c9-4ac7-8
e5e-89b7df19295c,},Annotations:map[string]string{io.kubernetes.container.hash: 9aacfe94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba0974d77ed6cae9390e642d6962087e3c80b050cf6bfa114fa6593bde64aee7,PodSandboxId:788563bf38e22d42ff70445a3e7dc5ca86356221a9c50ccbe097c161a570db36,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722272261121646806,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-fdwdm,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 377d84f1-430a-423a-8e08-3ffc0e083b56,},Annotations:map[string]string{io.kubernetes.container.hash: b69a97df,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2599cbcd1abdc6363e42cd84b81942c2062fb81ff763fe53dd406df7addc2b42,PodSandboxId:3f828acf7d097497af16440cc6cd07ae40d2bc608a845a408412fdda28abb0c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722272233545936328,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb738aeb-40ec-47f1-9422-8c2a64cb1b38,},Annotations:map[string]string{io.kubernetes.container.hash: 5db4b699,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36bd6adcb73e3db92ab8716fd9db0d1c3a693fba74911af5ad6739dd21be75cb,PodSandboxId:a7d20bcb6427eaaa99e2a808005690bbce7f36eb60d7bb3b5b7689203d9e1cc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722272231701006614,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6
d8ff4d-chxlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13483151-7a93-4b7e-bc8a-a0df4c049a67,},Annotations:map[string]string{io.kubernetes.container.hash: 82a81137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a55b409fab4e27959260804b6052797f442f48d1411d6f5b444548fa1720f7d,PodSandboxId:8fbf004632d3aa84babf678ad0864bf270121c283fcfba5f4aa1a1c73f779ca9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381
d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722272229216967769,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6wcxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 508ba4dd-e6d5-438e-a66c-0188b555f367,},Annotations:map[string]string{io.kubernetes.container.hash: 4e980fe0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316657090f5cffcd0503b91d427543c65807dac517373699e629c60a47444356,PodSandboxId:a22c8ef147418ae3b7b41984990582f394249928eb39a2cb517dbe663f43fbb2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd27
3badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722272208145849572,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6b3873e1eb7772d5b00a12b153cb28c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79298bbe1b2337e09867ac114b4875e36699d0eb1ba5b9725b468712ac570005,PodSandboxId:1a0140bf9d63825f6e047dd86d15a682e5e28597567507432eedafaa1e785527,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d
6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722272208173742099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2280a372007a9e99150f8ed8e7385ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654027072dc843ca69f02c3c234a1eae4c56b9d9447ece343121a56ed3166d37,PodSandboxId:bf49bf99b816bf5d51496123adba38127de234b880313dc7cbb09e625c7b0906,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,S
tate:CONTAINER_RUNNING,CreatedAt:1722272208088937677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b994742abe79d480c8f0ba290e51e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 8aba7843,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e54d28c2754d2aa11d61943c6e860b690ffc023075a34001580b78070aae803,PodSandboxId:c7bea62da2e92e649f0efd165cccb1c468a357414fbcbbb361e1db55f1f4bdcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722
272208090472348,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58abe85a975931c71d2ced52e3a7744c,},Annotations:map[string]string{io.kubernetes.container.hash: 91dad007,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2a1d48d7-ae5d-446b-b7c3-f2a281fb2607 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:04:58 addons-433102 crio[683]: time="2024-07-29 17:04:58.702875797Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=98437047-08cb-470b-b3d8-406dc8ade21c name=/runtime.v1.RuntimeService/Version
	Jul 29 17:04:58 addons-433102 crio[683]: time="2024-07-29 17:04:58.702936723Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=98437047-08cb-470b-b3d8-406dc8ade21c name=/runtime.v1.RuntimeService/Version
	Jul 29 17:04:58 addons-433102 crio[683]: time="2024-07-29 17:04:58.704066183Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6da72f26-4790-4cad-93a5-f0d6d33deb28 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:04:58 addons-433102 crio[683]: time="2024-07-29 17:04:58.705526355Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722272698705501449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6da72f26-4790-4cad-93a5-f0d6d33deb28 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:04:58 addons-433102 crio[683]: time="2024-07-29 17:04:58.706347730Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29fb868b-020b-44aa-9903-02b255ba38be name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:04:58 addons-433102 crio[683]: time="2024-07-29 17:04:58.706406107Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29fb868b-020b-44aa-9903-02b255ba38be name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:04:58 addons-433102 crio[683]: time="2024-07-29 17:04:58.706662056Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f40ae879569d775a79f117bd872c259ff93223ced84a0e688802eab6411d8c7,PodSandboxId:3570749d9d79d684051602d352dba7823388fd99b5f9878334cbc6d894862f43,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722272519487084714,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-cz2bv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 773dada2-8958-460d-b4f8-53d9981e74ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2be7a527,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552417b813e20a7ce35e6fcea96e06ad05be03de1a0bd835a9bf6528b1b97ed0,PodSandboxId:14569a4d3adaa424921ec1367e09008c4a61d82b4d4420f28aa901a13496fca1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722272380866463815,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bba16d61-afc5-4c02-85a7-8e1181099d91,},Annotations:map[string]string{io.kubernet
es.container.hash: da8d9711,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:168151c1371e9807e41e98e386b805a9c081e1884f375f772acd59461fd1e4e1,PodSandboxId:ad0779037f7870e83416e7b3d9c156a46b69c43805d64302b606f8dd75df6fa3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722272311011477451,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e3ae3c83-a5c9-4ac7-8
e5e-89b7df19295c,},Annotations:map[string]string{io.kubernetes.container.hash: 9aacfe94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba0974d77ed6cae9390e642d6962087e3c80b050cf6bfa114fa6593bde64aee7,PodSandboxId:788563bf38e22d42ff70445a3e7dc5ca86356221a9c50ccbe097c161a570db36,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722272261121646806,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-fdwdm,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 377d84f1-430a-423a-8e08-3ffc0e083b56,},Annotations:map[string]string{io.kubernetes.container.hash: b69a97df,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2599cbcd1abdc6363e42cd84b81942c2062fb81ff763fe53dd406df7addc2b42,PodSandboxId:3f828acf7d097497af16440cc6cd07ae40d2bc608a845a408412fdda28abb0c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722272233545936328,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb738aeb-40ec-47f1-9422-8c2a64cb1b38,},Annotations:map[string]string{io.kubernetes.container.hash: 5db4b699,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36bd6adcb73e3db92ab8716fd9db0d1c3a693fba74911af5ad6739dd21be75cb,PodSandboxId:a7d20bcb6427eaaa99e2a808005690bbce7f36eb60d7bb3b5b7689203d9e1cc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722272231701006614,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6
d8ff4d-chxlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13483151-7a93-4b7e-bc8a-a0df4c049a67,},Annotations:map[string]string{io.kubernetes.container.hash: 82a81137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a55b409fab4e27959260804b6052797f442f48d1411d6f5b444548fa1720f7d,PodSandboxId:8fbf004632d3aa84babf678ad0864bf270121c283fcfba5f4aa1a1c73f779ca9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381
d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722272229216967769,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6wcxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 508ba4dd-e6d5-438e-a66c-0188b555f367,},Annotations:map[string]string{io.kubernetes.container.hash: 4e980fe0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316657090f5cffcd0503b91d427543c65807dac517373699e629c60a47444356,PodSandboxId:a22c8ef147418ae3b7b41984990582f394249928eb39a2cb517dbe663f43fbb2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd27
3badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722272208145849572,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6b3873e1eb7772d5b00a12b153cb28c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79298bbe1b2337e09867ac114b4875e36699d0eb1ba5b9725b468712ac570005,PodSandboxId:1a0140bf9d63825f6e047dd86d15a682e5e28597567507432eedafaa1e785527,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d
6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722272208173742099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2280a372007a9e99150f8ed8e7385ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654027072dc843ca69f02c3c234a1eae4c56b9d9447ece343121a56ed3166d37,PodSandboxId:bf49bf99b816bf5d51496123adba38127de234b880313dc7cbb09e625c7b0906,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,S
tate:CONTAINER_RUNNING,CreatedAt:1722272208088937677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b994742abe79d480c8f0ba290e51e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 8aba7843,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e54d28c2754d2aa11d61943c6e860b690ffc023075a34001580b78070aae803,PodSandboxId:c7bea62da2e92e649f0efd165cccb1c468a357414fbcbbb361e1db55f1f4bdcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722
272208090472348,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58abe85a975931c71d2ced52e3a7744c,},Annotations:map[string]string{io.kubernetes.container.hash: 91dad007,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=29fb868b-020b-44aa-9903-02b255ba38be name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:04:58 addons-433102 crio[683]: time="2024-07-29 17:04:58.744463342Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=029984e2-24b8-414e-a951-f6bb473680f8 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:04:58 addons-433102 crio[683]: time="2024-07-29 17:04:58.744531418Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=029984e2-24b8-414e-a951-f6bb473680f8 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:04:58 addons-433102 crio[683]: time="2024-07-29 17:04:58.746500216Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1281349f-42d5-473e-b5bf-33d1242e17d2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:04:58 addons-433102 crio[683]: time="2024-07-29 17:04:58.747888409Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722272698747864109,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1281349f-42d5-473e-b5bf-33d1242e17d2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:04:58 addons-433102 crio[683]: time="2024-07-29 17:04:58.748407577Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d48f3698-daef-48e8-a55b-5ea3c1767620 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:04:58 addons-433102 crio[683]: time="2024-07-29 17:04:58.748458377Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d48f3698-daef-48e8-a55b-5ea3c1767620 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:04:58 addons-433102 crio[683]: time="2024-07-29 17:04:58.748739283Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f40ae879569d775a79f117bd872c259ff93223ced84a0e688802eab6411d8c7,PodSandboxId:3570749d9d79d684051602d352dba7823388fd99b5f9878334cbc6d894862f43,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722272519487084714,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-cz2bv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 773dada2-8958-460d-b4f8-53d9981e74ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2be7a527,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552417b813e20a7ce35e6fcea96e06ad05be03de1a0bd835a9bf6528b1b97ed0,PodSandboxId:14569a4d3adaa424921ec1367e09008c4a61d82b4d4420f28aa901a13496fca1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722272380866463815,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bba16d61-afc5-4c02-85a7-8e1181099d91,},Annotations:map[string]string{io.kubernet
es.container.hash: da8d9711,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:168151c1371e9807e41e98e386b805a9c081e1884f375f772acd59461fd1e4e1,PodSandboxId:ad0779037f7870e83416e7b3d9c156a46b69c43805d64302b606f8dd75df6fa3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722272311011477451,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e3ae3c83-a5c9-4ac7-8
e5e-89b7df19295c,},Annotations:map[string]string{io.kubernetes.container.hash: 9aacfe94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba0974d77ed6cae9390e642d6962087e3c80b050cf6bfa114fa6593bde64aee7,PodSandboxId:788563bf38e22d42ff70445a3e7dc5ca86356221a9c50ccbe097c161a570db36,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722272261121646806,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-fdwdm,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 377d84f1-430a-423a-8e08-3ffc0e083b56,},Annotations:map[string]string{io.kubernetes.container.hash: b69a97df,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2599cbcd1abdc6363e42cd84b81942c2062fb81ff763fe53dd406df7addc2b42,PodSandboxId:3f828acf7d097497af16440cc6cd07ae40d2bc608a845a408412fdda28abb0c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722272233545936328,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb738aeb-40ec-47f1-9422-8c2a64cb1b38,},Annotations:map[string]string{io.kubernetes.container.hash: 5db4b699,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36bd6adcb73e3db92ab8716fd9db0d1c3a693fba74911af5ad6739dd21be75cb,PodSandboxId:a7d20bcb6427eaaa99e2a808005690bbce7f36eb60d7bb3b5b7689203d9e1cc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722272231701006614,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6
d8ff4d-chxlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13483151-7a93-4b7e-bc8a-a0df4c049a67,},Annotations:map[string]string{io.kubernetes.container.hash: 82a81137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a55b409fab4e27959260804b6052797f442f48d1411d6f5b444548fa1720f7d,PodSandboxId:8fbf004632d3aa84babf678ad0864bf270121c283fcfba5f4aa1a1c73f779ca9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381
d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722272229216967769,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6wcxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 508ba4dd-e6d5-438e-a66c-0188b555f367,},Annotations:map[string]string{io.kubernetes.container.hash: 4e980fe0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316657090f5cffcd0503b91d427543c65807dac517373699e629c60a47444356,PodSandboxId:a22c8ef147418ae3b7b41984990582f394249928eb39a2cb517dbe663f43fbb2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd27
3badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722272208145849572,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6b3873e1eb7772d5b00a12b153cb28c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79298bbe1b2337e09867ac114b4875e36699d0eb1ba5b9725b468712ac570005,PodSandboxId:1a0140bf9d63825f6e047dd86d15a682e5e28597567507432eedafaa1e785527,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d
6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722272208173742099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2280a372007a9e99150f8ed8e7385ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654027072dc843ca69f02c3c234a1eae4c56b9d9447ece343121a56ed3166d37,PodSandboxId:bf49bf99b816bf5d51496123adba38127de234b880313dc7cbb09e625c7b0906,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,S
tate:CONTAINER_RUNNING,CreatedAt:1722272208088937677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b994742abe79d480c8f0ba290e51e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 8aba7843,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e54d28c2754d2aa11d61943c6e860b690ffc023075a34001580b78070aae803,PodSandboxId:c7bea62da2e92e649f0efd165cccb1c468a357414fbcbbb361e1db55f1f4bdcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722
272208090472348,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58abe85a975931c71d2ced52e3a7744c,},Annotations:map[string]string{io.kubernetes.container.hash: 91dad007,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d48f3698-daef-48e8-a55b-5ea3c1767620 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:04:58 addons-433102 crio[683]: time="2024-07-29 17:04:58.785563348Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e3b351f1-1628-4e0a-8da6-ec1732ffce15 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:04:58 addons-433102 crio[683]: time="2024-07-29 17:04:58.785626169Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e3b351f1-1628-4e0a-8da6-ec1732ffce15 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:04:58 addons-433102 crio[683]: time="2024-07-29 17:04:58.787076064Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76683f57-b764-44cc-9c83-5502bc27e523 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:04:58 addons-433102 crio[683]: time="2024-07-29 17:04:58.788378942Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722272698788352085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76683f57-b764-44cc-9c83-5502bc27e523 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:04:58 addons-433102 crio[683]: time="2024-07-29 17:04:58.789002921Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=592d6224-7b53-44ec-91a0-6e6be8f24a38 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:04:58 addons-433102 crio[683]: time="2024-07-29 17:04:58.789055024Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=592d6224-7b53-44ec-91a0-6e6be8f24a38 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:04:58 addons-433102 crio[683]: time="2024-07-29 17:04:58.789366755Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f40ae879569d775a79f117bd872c259ff93223ced84a0e688802eab6411d8c7,PodSandboxId:3570749d9d79d684051602d352dba7823388fd99b5f9878334cbc6d894862f43,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722272519487084714,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-cz2bv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 773dada2-8958-460d-b4f8-53d9981e74ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2be7a527,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552417b813e20a7ce35e6fcea96e06ad05be03de1a0bd835a9bf6528b1b97ed0,PodSandboxId:14569a4d3adaa424921ec1367e09008c4a61d82b4d4420f28aa901a13496fca1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722272380866463815,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bba16d61-afc5-4c02-85a7-8e1181099d91,},Annotations:map[string]string{io.kubernet
es.container.hash: da8d9711,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:168151c1371e9807e41e98e386b805a9c081e1884f375f772acd59461fd1e4e1,PodSandboxId:ad0779037f7870e83416e7b3d9c156a46b69c43805d64302b606f8dd75df6fa3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722272311011477451,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e3ae3c83-a5c9-4ac7-8
e5e-89b7df19295c,},Annotations:map[string]string{io.kubernetes.container.hash: 9aacfe94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba0974d77ed6cae9390e642d6962087e3c80b050cf6bfa114fa6593bde64aee7,PodSandboxId:788563bf38e22d42ff70445a3e7dc5ca86356221a9c50ccbe097c161a570db36,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722272261121646806,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-fdwdm,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 377d84f1-430a-423a-8e08-3ffc0e083b56,},Annotations:map[string]string{io.kubernetes.container.hash: b69a97df,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2599cbcd1abdc6363e42cd84b81942c2062fb81ff763fe53dd406df7addc2b42,PodSandboxId:3f828acf7d097497af16440cc6cd07ae40d2bc608a845a408412fdda28abb0c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722272233545936328,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb738aeb-40ec-47f1-9422-8c2a64cb1b38,},Annotations:map[string]string{io.kubernetes.container.hash: 5db4b699,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36bd6adcb73e3db92ab8716fd9db0d1c3a693fba74911af5ad6739dd21be75cb,PodSandboxId:a7d20bcb6427eaaa99e2a808005690bbce7f36eb60d7bb3b5b7689203d9e1cc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722272231701006614,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6
d8ff4d-chxlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13483151-7a93-4b7e-bc8a-a0df4c049a67,},Annotations:map[string]string{io.kubernetes.container.hash: 82a81137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a55b409fab4e27959260804b6052797f442f48d1411d6f5b444548fa1720f7d,PodSandboxId:8fbf004632d3aa84babf678ad0864bf270121c283fcfba5f4aa1a1c73f779ca9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381
d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722272229216967769,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6wcxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 508ba4dd-e6d5-438e-a66c-0188b555f367,},Annotations:map[string]string{io.kubernetes.container.hash: 4e980fe0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316657090f5cffcd0503b91d427543c65807dac517373699e629c60a47444356,PodSandboxId:a22c8ef147418ae3b7b41984990582f394249928eb39a2cb517dbe663f43fbb2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd27
3badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722272208145849572,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6b3873e1eb7772d5b00a12b153cb28c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79298bbe1b2337e09867ac114b4875e36699d0eb1ba5b9725b468712ac570005,PodSandboxId:1a0140bf9d63825f6e047dd86d15a682e5e28597567507432eedafaa1e785527,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d
6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722272208173742099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2280a372007a9e99150f8ed8e7385ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654027072dc843ca69f02c3c234a1eae4c56b9d9447ece343121a56ed3166d37,PodSandboxId:bf49bf99b816bf5d51496123adba38127de234b880313dc7cbb09e625c7b0906,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,S
tate:CONTAINER_RUNNING,CreatedAt:1722272208088937677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b994742abe79d480c8f0ba290e51e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 8aba7843,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e54d28c2754d2aa11d61943c6e860b690ffc023075a34001580b78070aae803,PodSandboxId:c7bea62da2e92e649f0efd165cccb1c468a357414fbcbbb361e1db55f1f4bdcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722
272208090472348,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-433102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58abe85a975931c71d2ced52e3a7744c,},Annotations:map[string]string{io.kubernetes.container.hash: 91dad007,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=592d6224-7b53-44ec-91a0-6e6be8f24a38 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1f40ae879569d       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   3570749d9d79d       hello-world-app-6778b5fc9f-cz2bv
	552417b813e20       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         5 minutes ago       Running             nginx                     0                   14569a4d3adaa       nginx
	168151c1371e9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   ad0779037f787       busybox
	ba0974d77ed6c       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   788563bf38e22       metrics-server-c59844bb4-fdwdm
	2599cbcd1abdc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   3f828acf7d097       storage-provisioner
	36bd6adcb73e3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   a7d20bcb6427e       coredns-7db6d8ff4d-chxlc
	2a55b409fab4e       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                        7 minutes ago       Running             kube-proxy                0                   8fbf004632d3a       kube-proxy-6wcxr
	79298bbe1b233       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                        8 minutes ago       Running             kube-scheduler            0                   1a0140bf9d638       kube-scheduler-addons-433102
	316657090f5cf       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                        8 minutes ago       Running             kube-controller-manager   0                   a22c8ef147418       kube-controller-manager-addons-433102
	4e54d28c2754d       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                        8 minutes ago       Running             kube-apiserver            0                   c7bea62da2e92       kube-apiserver-addons-433102
	654027072dc84       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        8 minutes ago       Running             etcd                      0                   bf49bf99b816b       etcd-addons-433102
	
	
	==> coredns [36bd6adcb73e3db92ab8716fd9db0d1c3a693fba74911af5ad6739dd21be75cb] <==
	Trace[2142447518]: [30.000648548s] [30.000648548s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[577238642]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 16:57:12.762) (total time: 30012ms):
	Trace[577238642]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30012ms (16:57:42.774)
	Trace[577238642]: [30.012315349s] [30.012315349s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[155335818]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 16:57:12.773) (total time: 30001ms):
	Trace[155335818]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (16:57:42.774)
	Trace[155335818]: [30.00149739s] [30.00149739s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 10.244.0.22:37638 - 1625 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000171656s
	[INFO] 10.244.0.22:43559 - 62030 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000370461s
	[INFO] 10.244.0.22:49818 - 43448 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000092003s
	[INFO] 10.244.0.22:59059 - 54944 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000383853s
	[INFO] 10.244.0.22:48824 - 4755 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000093734s
	[INFO] 10.244.0.22:45350 - 10134 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000056265s
	[INFO] 10.244.0.22:35202 - 347 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001346528s
	[INFO] 10.244.0.22:60289 - 25412 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001438966s
	[INFO] 10.244.0.24:58940 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000586437s
	[INFO] 10.244.0.24:42852 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000155432s
	
	
	==> describe nodes <==
	Name:               addons-433102
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-433102
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=addons-433102
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T16_56_54_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-433102
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 16:56:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-433102
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:04:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:02:30 +0000   Mon, 29 Jul 2024 16:56:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:02:30 +0000   Mon, 29 Jul 2024 16:56:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:02:30 +0000   Mon, 29 Jul 2024 16:56:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:02:30 +0000   Mon, 29 Jul 2024 16:56:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.73
	  Hostname:    addons-433102
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac35226c0ae2487b829b216aeb471bfb
	  System UUID:                ac35226c-0ae2-487b-829b-216aeb471bfb
	  Boot ID:                    2cf79d73-3d23-4b77-9315-61b82db51e3e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  default                     hello-world-app-6778b5fc9f-cz2bv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 coredns-7db6d8ff4d-chxlc                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m52s
	  kube-system                 etcd-addons-433102                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m6s
	  kube-system                 kube-apiserver-addons-433102             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m6s
	  kube-system                 kube-controller-manager-addons-433102    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m6s
	  kube-system                 kube-proxy-6wcxr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 kube-scheduler-addons-433102             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m6s
	  kube-system                 metrics-server-c59844bb4-fdwdm           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m47s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m48s  kube-proxy       
	  Normal  Starting                 8m6s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m6s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m6s   kubelet          Node addons-433102 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m6s   kubelet          Node addons-433102 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m6s   kubelet          Node addons-433102 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m5s   kubelet          Node addons-433102 status is now: NodeReady
	  Normal  RegisteredNode           7m53s  node-controller  Node addons-433102 event: Registered Node addons-433102 in Controller
	
	
	==> dmesg <==
	[  +0.142258] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.098887] kauditd_printk_skb: 85 callbacks suppressed
	[  +5.120680] kauditd_printk_skb: 161 callbacks suppressed
	[  +5.853141] kauditd_printk_skb: 64 callbacks suppressed
	[  +7.822274] kauditd_printk_skb: 5 callbacks suppressed
	[ +10.788167] kauditd_printk_skb: 13 callbacks suppressed
	[ +12.129345] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.214772] kauditd_printk_skb: 12 callbacks suppressed
	[Jul29 16:58] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.542734] kauditd_printk_skb: 78 callbacks suppressed
	[ +14.450806] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.667528] kauditd_printk_skb: 57 callbacks suppressed
	[ +21.900388] kauditd_printk_skb: 6 callbacks suppressed
	[Jul29 16:59] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.078951] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.065817] kauditd_printk_skb: 60 callbacks suppressed
	[  +5.508316] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.060702] kauditd_printk_skb: 21 callbacks suppressed
	[  +7.985343] kauditd_printk_skb: 35 callbacks suppressed
	[  +6.875915] kauditd_printk_skb: 31 callbacks suppressed
	[  +9.159485] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.401675] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.243275] kauditd_printk_skb: 24 callbacks suppressed
	[Jul29 17:01] kauditd_printk_skb: 2 callbacks suppressed
	[Jul29 17:02] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [654027072dc843ca69f02c3c234a1eae4c56b9d9447ece343121a56ed3166d37] <==
	{"level":"info","ts":"2024-07-29T16:58:02.706714Z","caller":"traceutil/trace.go:171","msg":"trace[1916380899] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1022; }","duration":"333.674569ms","start":"2024-07-29T16:58:02.373029Z","end":"2024-07-29T16:58:02.706703Z","steps":["trace[1916380899] 'agreement among raft nodes before linearized reading'  (duration: 332.02728ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T16:58:02.70687Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T16:58:02.373016Z","time spent":"333.8383ms","remote":"127.0.0.1:39452","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14375,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"info","ts":"2024-07-29T16:58:22.794159Z","caller":"traceutil/trace.go:171","msg":"trace[1000447906] transaction","detail":"{read_only:false; response_revision:1133; number_of_response:1; }","duration":"225.707144ms","start":"2024-07-29T16:58:22.568422Z","end":"2024-07-29T16:58:22.794129Z","steps":["trace[1000447906] 'process raft request'  (duration: 225.281432ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T16:58:23.039622Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.803419ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-c59844bb4-fdwdm.17e6bd7f99ef4ba8\" ","response":"range_response_count:1 size:813"}
	{"level":"warn","ts":"2024-07-29T16:58:23.039662Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.651373ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T16:58:23.039671Z","caller":"traceutil/trace.go:171","msg":"trace[1839948077] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-c59844bb4-fdwdm.17e6bd7f99ef4ba8; range_end:; response_count:1; response_revision:1133; }","duration":"108.890307ms","start":"2024-07-29T16:58:22.930768Z","end":"2024-07-29T16:58:23.039658Z","steps":["trace[1839948077] 'range keys from in-memory index tree'  (duration: 108.658222ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T16:58:23.039698Z","caller":"traceutil/trace.go:171","msg":"trace[2012143020] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1133; }","duration":"103.751544ms","start":"2024-07-29T16:58:22.935938Z","end":"2024-07-29T16:58:23.039689Z","steps":["trace[2012143020] 'range keys from in-memory index tree'  (duration: 103.612834ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T16:58:59.750646Z","caller":"traceutil/trace.go:171","msg":"trace[328429338] transaction","detail":"{read_only:false; response_revision:1335; number_of_response:1; }","duration":"141.153601ms","start":"2024-07-29T16:58:59.609468Z","end":"2024-07-29T16:58:59.750622Z","steps":["trace[328429338] 'process raft request'  (duration: 140.904059ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T16:59:22.179164Z","caller":"traceutil/trace.go:171","msg":"trace[1576085642] transaction","detail":"{read_only:false; response_revision:1495; number_of_response:1; }","duration":"316.163063ms","start":"2024-07-29T16:59:21.862945Z","end":"2024-07-29T16:59:22.179108Z","steps":["trace[1576085642] 'process raft request'  (duration: 316.053636ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T16:59:22.179259Z","caller":"traceutil/trace.go:171","msg":"trace[1362362908] linearizableReadLoop","detail":"{readStateIndex:1551; appliedIndex:1551; }","duration":"287.857329ms","start":"2024-07-29T16:59:21.891386Z","end":"2024-07-29T16:59:22.179244Z","steps":["trace[1362362908] 'read index received'  (duration: 287.851825ms)","trace[1362362908] 'applied index is now lower than readState.Index'  (duration: 4.601µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T16:59:22.179389Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"287.978344ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-29T16:59:22.179415Z","caller":"traceutil/trace.go:171","msg":"trace[1724860483] range","detail":"{range_begin:/registry/jobs/; range_end:/registry/jobs0; response_count:0; response_revision:1495; }","duration":"288.050863ms","start":"2024-07-29T16:59:21.891357Z","end":"2024-07-29T16:59:22.179408Z","steps":["trace[1724860483] 'agreement among raft nodes before linearized reading'  (duration: 287.957902ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T16:59:22.179407Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T16:59:21.86293Z","time spent":"316.344989ms","remote":"127.0.0.1:39440","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1480 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-07-29T16:59:22.1853Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"273.708537ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T16:59:22.185561Z","caller":"traceutil/trace.go:171","msg":"trace[49773128] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1496; }","duration":"275.698143ms","start":"2024-07-29T16:59:21.909766Z","end":"2024-07-29T16:59:22.185464Z","steps":["trace[49773128] 'agreement among raft nodes before linearized reading'  (duration: 273.698836ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T16:59:22.186751Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.584633ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:12134"}
	{"level":"info","ts":"2024-07-29T16:59:22.186884Z","caller":"traceutil/trace.go:171","msg":"trace[166255791] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:1496; }","duration":"168.846033ms","start":"2024-07-29T16:59:22.018031Z","end":"2024-07-29T16:59:22.186877Z","steps":["trace[166255791] 'agreement among raft nodes before linearized reading'  (duration: 168.373744ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T16:59:22.187369Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"251.845992ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T16:59:22.18748Z","caller":"traceutil/trace.go:171","msg":"trace[2086234027] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1496; }","duration":"251.979298ms","start":"2024-07-29T16:59:21.935492Z","end":"2024-07-29T16:59:22.187471Z","steps":["trace[2086234027] 'agreement among raft nodes before linearized reading'  (duration: 251.851669ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T16:59:40.770712Z","caller":"traceutil/trace.go:171","msg":"trace[2075011209] transaction","detail":"{read_only:false; response_revision:1670; number_of_response:1; }","duration":"290.688029ms","start":"2024-07-29T16:59:40.479988Z","end":"2024-07-29T16:59:40.770676Z","steps":["trace[2075011209] 'process raft request'  (duration: 290.284883ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T17:00:00.592864Z","caller":"traceutil/trace.go:171","msg":"trace[1762399] linearizableReadLoop","detail":"{readStateIndex:1933; appliedIndex:1932; }","duration":"138.127973ms","start":"2024-07-29T17:00:00.45466Z","end":"2024-07-29T17:00:00.592788Z","steps":["trace[1762399] 'read index received'  (duration: 137.993767ms)","trace[1762399] 'applied index is now lower than readState.Index'  (duration: 133.867µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T17:00:00.593032Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.34442ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3607"}
	{"level":"info","ts":"2024-07-29T17:00:00.593085Z","caller":"traceutil/trace.go:171","msg":"trace[201807023] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1862; }","duration":"138.437973ms","start":"2024-07-29T17:00:00.454633Z","end":"2024-07-29T17:00:00.593071Z","steps":["trace[201807023] 'agreement among raft nodes before linearized reading'  (duration: 138.304913ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T17:00:00.593254Z","caller":"traceutil/trace.go:171","msg":"trace[1438204927] transaction","detail":"{read_only:false; response_revision:1862; number_of_response:1; }","duration":"206.312001ms","start":"2024-07-29T17:00:00.386927Z","end":"2024-07-29T17:00:00.593239Z","steps":["trace[1438204927] 'process raft request'  (duration: 205.771914ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T17:00:32.807108Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T17:00:32.460163Z","time spent":"346.935326ms","remote":"127.0.0.1:39264","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	
	
	==> kernel <==
	 17:04:59 up 8 min,  0 users,  load average: 0.15, 0.79, 0.63
	Linux addons-433102 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4e54d28c2754d2aa11d61943c6e860b690ffc023075a34001580b78070aae803] <==
	I0729 16:58:52.990163       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0729 16:58:52.998621       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0729 16:59:04.282262       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0729 16:59:05.356413       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0729 16:59:29.080019       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0729 16:59:38.136017       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0729 16:59:38.331400       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.113.173"}
	E0729 16:59:42.090755       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0729 16:59:49.706608       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 16:59:49.706895       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 16:59:49.743087       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 16:59:49.743164       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 16:59:49.768500       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 16:59:49.768627       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 16:59:49.797454       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 16:59:49.797528       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 16:59:49.862437       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 16:59:49.862497       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0729 16:59:50.769093       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0729 16:59:50.863740       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0729 16:59:50.879028       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0729 16:59:56.368463       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.115.170"}
	I0729 17:01:58.324089       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.139.192"}
	E0729 17:02:00.042077       1 watch.go:250] http2: stream closed
	E0729 17:02:00.746585       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [316657090f5cffcd0503b91d427543c65807dac517373699e629c60a47444356] <==
	W0729 17:02:47.816407       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:02:47.816481       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:02:53.455715       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:02:53.455913       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:03:03.655068       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:03:03.655194       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:03:14.754047       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:03:14.754112       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:03:31.936361       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:03:31.936447       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:03:42.865058       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:03:42.865092       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:03:45.366645       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:03:45.366755       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:03:58.552881       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:03:58.552936       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:04:10.310486       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:04:10.310629       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:04:20.439300       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:04:20.439361       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:04:31.045288       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:04:31.045339       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:04:31.533436       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:04:31.533538       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0729 17:04:57.811951       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="9.189µs"
	
	
	==> kube-proxy [2a55b409fab4e27959260804b6052797f442f48d1411d6f5b444548fa1720f7d] <==
	I0729 16:57:09.801024       1 server_linux.go:69] "Using iptables proxy"
	I0729 16:57:09.839484       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.73"]
	I0729 16:57:10.422467       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 16:57:10.422528       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 16:57:10.422546       1 server_linux.go:165] "Using iptables Proxier"
	I0729 16:57:10.622162       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 16:57:10.622360       1 server.go:872] "Version info" version="v1.30.3"
	I0729 16:57:10.622390       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 16:57:10.683896       1 config.go:192] "Starting service config controller"
	I0729 16:57:10.683944       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 16:57:10.683986       1 config.go:101] "Starting endpoint slice config controller"
	I0729 16:57:10.683990       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 16:57:10.684414       1 config.go:319] "Starting node config controller"
	I0729 16:57:10.684443       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 16:57:10.822005       1 shared_informer.go:320] Caches are synced for node config
	I0729 16:57:10.822295       1 shared_informer.go:320] Caches are synced for service config
	I0729 16:57:10.822317       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [79298bbe1b2337e09867ac114b4875e36699d0eb1ba5b9725b468712ac570005] <==
	W0729 16:56:50.943176       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 16:56:50.945597       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 16:56:50.943858       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 16:56:50.943979       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 16:56:50.943995       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 16:56:50.944160       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 16:56:51.789063       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 16:56:51.789188       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 16:56:51.794605       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 16:56:51.794730       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 16:56:51.817477       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 16:56:51.817755       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 16:56:51.845352       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 16:56:51.845439       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 16:56:51.949743       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 16:56:51.949932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 16:56:51.961875       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 16:56:51.961992       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 16:56:51.966713       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 16:56:51.966876       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 16:56:51.976744       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 16:56:51.977282       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 16:56:52.066562       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 16:56:52.067652       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0729 16:56:53.937937       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 17:02:53 addons-433102 kubelet[1272]: E0729 17:02:53.371787    1272 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:02:53 addons-433102 kubelet[1272]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:02:53 addons-433102 kubelet[1272]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:02:53 addons-433102 kubelet[1272]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:02:53 addons-433102 kubelet[1272]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:02:54 addons-433102 kubelet[1272]: I0729 17:02:54.230424    1272 scope.go:117] "RemoveContainer" containerID="6f05ed99aac59f862e4fa68ec9a7cc203e689d314aef7acd1ce66000468e98f7"
	Jul 29 17:02:54 addons-433102 kubelet[1272]: I0729 17:02:54.245494    1272 scope.go:117] "RemoveContainer" containerID="0e3723967067fd8c4aff73427ece3579d27b86c225b1a8485d140c46bce1f89a"
	Jul 29 17:03:32 addons-433102 kubelet[1272]: I0729 17:03:32.324969    1272 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 29 17:03:53 addons-433102 kubelet[1272]: E0729 17:03:53.370765    1272 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:03:53 addons-433102 kubelet[1272]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:03:53 addons-433102 kubelet[1272]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:03:53 addons-433102 kubelet[1272]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:03:53 addons-433102 kubelet[1272]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:04:33 addons-433102 kubelet[1272]: I0729 17:04:33.324736    1272 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 29 17:04:53 addons-433102 kubelet[1272]: E0729 17:04:53.372316    1272 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:04:53 addons-433102 kubelet[1272]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:04:53 addons-433102 kubelet[1272]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:04:53 addons-433102 kubelet[1272]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:04:53 addons-433102 kubelet[1272]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:04:59 addons-433102 kubelet[1272]: I0729 17:04:59.220771    1272 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgt4r\" (UniqueName: \"kubernetes.io/projected/377d84f1-430a-423a-8e08-3ffc0e083b56-kube-api-access-dgt4r\") pod \"377d84f1-430a-423a-8e08-3ffc0e083b56\" (UID: \"377d84f1-430a-423a-8e08-3ffc0e083b56\") "
	Jul 29 17:04:59 addons-433102 kubelet[1272]: I0729 17:04:59.221998    1272 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/377d84f1-430a-423a-8e08-3ffc0e083b56-tmp-dir\") pod \"377d84f1-430a-423a-8e08-3ffc0e083b56\" (UID: \"377d84f1-430a-423a-8e08-3ffc0e083b56\") "
	Jul 29 17:04:59 addons-433102 kubelet[1272]: I0729 17:04:59.222398    1272 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/377d84f1-430a-423a-8e08-3ffc0e083b56-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "377d84f1-430a-423a-8e08-3ffc0e083b56" (UID: "377d84f1-430a-423a-8e08-3ffc0e083b56"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 29 17:04:59 addons-433102 kubelet[1272]: I0729 17:04:59.231112    1272 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/377d84f1-430a-423a-8e08-3ffc0e083b56-kube-api-access-dgt4r" (OuterVolumeSpecName: "kube-api-access-dgt4r") pod "377d84f1-430a-423a-8e08-3ffc0e083b56" (UID: "377d84f1-430a-423a-8e08-3ffc0e083b56"). InnerVolumeSpecName "kube-api-access-dgt4r". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 29 17:04:59 addons-433102 kubelet[1272]: I0729 17:04:59.323195    1272 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/377d84f1-430a-423a-8e08-3ffc0e083b56-tmp-dir\") on node \"addons-433102\" DevicePath \"\""
	Jul 29 17:04:59 addons-433102 kubelet[1272]: I0729 17:04:59.323222    1272 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-dgt4r\" (UniqueName: \"kubernetes.io/projected/377d84f1-430a-423a-8e08-3ffc0e083b56-kube-api-access-dgt4r\") on node \"addons-433102\" DevicePath \"\""
	
	
	==> storage-provisioner [2599cbcd1abdc6363e42cd84b81942c2062fb81ff763fe53dd406df7addc2b42] <==
	I0729 16:57:14.026382       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 16:57:14.055161       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 16:57:14.055234       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 16:57:14.083407       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 16:57:14.083582       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-433102_5781e976-6e08-4c8c-9c60-c03e601c784d!
	I0729 16:57:14.091406       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"14133a61-5614-4788-b090-089c59317928", APIVersion:"v1", ResourceVersion:"621", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-433102_5781e976-6e08-4c8c-9c60-c03e601c784d became leader
	I0729 16:57:14.184561       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-433102_5781e976-6e08-4c8c-9c60-c03e601c784d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-433102 -n addons-433102
helpers_test.go:261: (dbg) Run:  kubectl --context addons-433102 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (362.22s)
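A failure here usually means the aggregated v1beta1.metrics.k8s.io API never served metrics within the test's timeout; the apiserver log above already shows one APIService update being rejected with "the object has been modified", and the controller-manager was still resyncing the metrics-server ReplicaSet at the end of the run. A minimal sketch of a manual follow-up against this profile, assuming the addon's metrics-server pods carry the usual k8s-app=metrics-server label:

	# Is the aggregated API registered and Available?
	kubectl --context addons-433102 get apiservice v1beta1.metrics.k8s.io
	# State and recent logs of the pods backing it
	kubectl --context addons-433102 -n kube-system get pods -l k8s-app=metrics-server -o wide
	kubectl --context addons-433102 -n kube-system logs -l k8s-app=metrics-server --tail=50
	# Only succeeds once the APIService reports Available=True
	kubectl --context addons-433102 top nodes

If the APIService stays at Available=False, the metrics-server pod logs are the first place to look.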

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.3s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-433102
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-433102: exit status 82 (2m0.453036653s)

                                                
                                                
-- stdout --
	* Stopping node "addons-433102"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-433102" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-433102
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-433102: exit status 11 (21.555417021s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.73:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-433102" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-433102
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-433102: exit status 11 (6.144276065s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.73:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-433102" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-433102
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-433102: exit status 11 (6.14305689s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.73:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-433102" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.30s)
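The underlying failure is the GUEST_STOP_TIMEOUT above: "minikube stop" returned exit status 82 with the VM still reported as "Running", and every follow-up addons command then failed with "dial tcp 192.168.39.73:22: connect: no route to host" because the guest was no longer reachable over SSH. A rough manual triage for this state on the kvm2 driver, assuming the libvirt domain is named after the profile (addons-433102), could be:

	# Re-run the stop with verbose logging to see where it hangs
	out/minikube-linux-amd64 stop -p addons-433102 --alsologtostderr -v=1
	# Ask libvirt directly what state the domain is in
	sudo virsh list --all
	sudo virsh dominfo addons-433102
	# Collect the log bundle the error message asks to attach to an issue
	out/minikube-linux-amd64 logs -p addons-433102 --file=logs.txt

Comparing virsh's view of the domain with minikube's timed-out stop request localizes whether the hang is in the guest shutdown or in the driver.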

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (242.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-419822 /tmp/TestFunctionalparallelMountCmdany-port4044759520/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722273113165445788" to /tmp/TestFunctionalparallelMountCmdany-port4044759520/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722273113165445788" to /tmp/TestFunctionalparallelMountCmdany-port4044759520/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722273113165445788" to /tmp/TestFunctionalparallelMountCmdany-port4044759520/001/test-1722273113165445788
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419822 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (252.572293ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 29 17:11 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 29 17:11 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 29 17:11 test-1722273113165445788
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh cat /mount-9p/test-1722273113165445788
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-419822 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a3ab0d45-8f69-4d63-8dca-65463a305693] Pending
helpers_test.go:344: "busybox-mount" [a3ab0d45-8f69-4d63-8dca-65463a305693] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:329: TestFunctional/parallel/MountCmd/any-port: WARNING: pod list for "default" "integration-test=busybox-mount" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_mount_test.go:153: ***** TestFunctional/parallel/MountCmd/any-port: pod "integration-test=busybox-mount" failed to start within 4m0s: context deadline exceeded ****
functional_test_mount_test.go:153: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-419822 -n functional-419822
functional_test_mount_test.go:153: TestFunctional/parallel/MountCmd/any-port: showing logs for failed pods as of 2024-07-29 17:15:55.037180396 +0000 UTC m=+1211.704548691
functional_test_mount_test.go:153: (dbg) Run:  kubectl --context functional-419822 describe po busybox-mount -n default
functional_test_mount_test.go:153: (dbg) kubectl --context functional-419822 describe po busybox-mount -n default:
Name:             busybox-mount
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-419822/192.168.39.26
Start Time:       Mon, 29 Jul 2024 17:11:54 +0000
Labels:           integration-test=busybox-mount
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  mount-munger:
    Container ID:  
    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      --
    Args:
      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /mount-9p from test-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hf6nm (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  test-volume:
    Type:          HostPath (bare host directory volume)
    Path:          /mount-9p
    HostPathType:  
  kube-api-access-hf6nm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  4m1s  default-scheduler  Successfully assigned default/busybox-mount to functional-419822
functional_test_mount_test.go:153: (dbg) Run:  kubectl --context functional-419822 logs busybox-mount -n default
functional_test_mount_test.go:153: (dbg) Non-zero exit: kubectl --context functional-419822 logs busybox-mount -n default: exit status 1 (66.904006ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mount-munger" in pod "busybox-mount" is waiting to start: ContainerCreating

                                                
                                                
** /stderr **
functional_test_mount_test.go:153: kubectl --context functional-419822 logs busybox-mount -n default: exit status 1
functional_test_mount_test.go:154: failed waiting for busybox-mount pod: integration-test=busybox-mount within 4m0s: context deadline exceeded
functional_test_mount_test.go:80: "TestFunctional/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419822 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (185.388139ms)

                                                
                                                
-- stdout --
	192.168.39.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=1000,access=any,msize=65536,trans=tcp,noextend,port=35741)
	total 2
	-rw-r--r-- 1 docker docker 24 Jul 29 17:11 created-by-test
	-rw-r--r-- 1 docker docker 24 Jul 29 17:11 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Jul 29 17:11 test-1722273113165445788
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-amd64 -p functional-419822 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-419822 /tmp/TestFunctionalparallelMountCmdany-port4044759520/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-419822 /tmp/TestFunctionalparallelMountCmdany-port4044759520/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalparallelMountCmdany-port4044759520/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.39.1:35741
* Userspace file server: ufs starting
* Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port4044759520/001 to /mount-9p

                                                
                                                
* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

                                                
                                                

                                                
                                                
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-419822 /tmp/TestFunctionalparallelMountCmdany-port4044759520/001:/mount-9p --alsologtostderr -v=1] stderr:
I0729 17:11:53.218939   26459 out.go:291] Setting OutFile to fd 1 ...
I0729 17:11:53.219095   26459 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 17:11:53.219106   26459 out.go:304] Setting ErrFile to fd 2...
I0729 17:11:53.219111   26459 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 17:11:53.219320   26459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
I0729 17:11:53.219536   26459 mustload.go:65] Loading cluster: functional-419822
I0729 17:11:53.219838   26459 config.go:182] Loaded profile config "functional-419822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 17:11:53.220190   26459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 17:11:53.220240   26459 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 17:11:53.234375   26459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35851
I0729 17:11:53.234853   26459 main.go:141] libmachine: () Calling .GetVersion
I0729 17:11:53.235497   26459 main.go:141] libmachine: Using API Version  1
I0729 17:11:53.235519   26459 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 17:11:53.236062   26459 main.go:141] libmachine: () Calling .GetMachineName
I0729 17:11:53.236398   26459 main.go:141] libmachine: (functional-419822) Calling .GetState
I0729 17:11:53.237860   26459 host.go:66] Checking if "functional-419822" exists ...
I0729 17:11:53.238155   26459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 17:11:53.238178   26459 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 17:11:53.252521   26459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36095
I0729 17:11:53.252838   26459 main.go:141] libmachine: () Calling .GetVersion
I0729 17:11:53.253305   26459 main.go:141] libmachine: Using API Version  1
I0729 17:11:53.253334   26459 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 17:11:53.253616   26459 main.go:141] libmachine: () Calling .GetMachineName
I0729 17:11:53.253798   26459 main.go:141] libmachine: (functional-419822) Calling .DriverName
I0729 17:11:53.253947   26459 main.go:141] libmachine: (functional-419822) Calling .DriverName
I0729 17:11:53.254094   26459 main.go:141] libmachine: (functional-419822) Calling .GetIP
I0729 17:11:53.257236   26459 main.go:141] libmachine: (functional-419822) DBG | domain functional-419822 has defined MAC address 52:54:00:af:4e:d9 in network mk-functional-419822
I0729 17:11:53.257674   26459 main.go:141] libmachine: (functional-419822) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:4e:d9", ip: ""} in network mk-functional-419822: {Iface:virbr1 ExpiryTime:2024-07-29 18:08:42 +0000 UTC Type:0 Mac:52:54:00:af:4e:d9 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:functional-419822 Clientid:01:52:54:00:af:4e:d9}
I0729 17:11:53.257756   26459 main.go:141] libmachine: (functional-419822) DBG | domain functional-419822 has defined IP address 192.168.39.26 and MAC address 52:54:00:af:4e:d9 in network mk-functional-419822
I0729 17:11:53.258184   26459 main.go:141] libmachine: (functional-419822) Calling .DriverName
I0729 17:11:53.260623   26459 out.go:177] * Mounting host path /tmp/TestFunctionalparallelMountCmdany-port4044759520/001 into VM as /mount-9p ...
I0729 17:11:53.261881   26459 out.go:177]   - Mount type:   9p
I0729 17:11:53.263009   26459 out.go:177]   - User ID:      docker
I0729 17:11:53.264044   26459 out.go:177]   - Group ID:     docker
I0729 17:11:53.265116   26459 out.go:177]   - Version:      9p2000.L
I0729 17:11:53.266179   26459 out.go:177]   - Message Size: 262144
I0729 17:11:53.267205   26459 out.go:177]   - Options:      map[]
I0729 17:11:53.268232   26459 out.go:177]   - Bind Address: 192.168.39.1:35741
I0729 17:11:53.269420   26459 out.go:177] * Userspace file server: 
I0729 17:11:53.269543   26459 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f /mount-9p || echo "
I0729 17:11:53.269611   26459 main.go:141] libmachine: (functional-419822) Calling .GetSSHHostname
I0729 17:11:53.272482   26459 main.go:141] libmachine: (functional-419822) DBG | domain functional-419822 has defined MAC address 52:54:00:af:4e:d9 in network mk-functional-419822
I0729 17:11:53.272933   26459 main.go:141] libmachine: (functional-419822) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:4e:d9", ip: ""} in network mk-functional-419822: {Iface:virbr1 ExpiryTime:2024-07-29 18:08:42 +0000 UTC Type:0 Mac:52:54:00:af:4e:d9 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:functional-419822 Clientid:01:52:54:00:af:4e:d9}
I0729 17:11:53.272977   26459 main.go:141] libmachine: (functional-419822) DBG | domain functional-419822 has defined IP address 192.168.39.26 and MAC address 52:54:00:af:4e:d9 in network mk-functional-419822
I0729 17:11:53.273260   26459 main.go:141] libmachine: (functional-419822) Calling .GetSSHPort
I0729 17:11:53.273421   26459 main.go:141] libmachine: (functional-419822) Calling .GetSSHKeyPath
I0729 17:11:53.273558   26459 main.go:141] libmachine: (functional-419822) Calling .GetSSHUsername
I0729 17:11:53.273683   26459 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/functional-419822/id_rsa Username:docker}
I0729 17:11:53.412184   26459 mount.go:180] unmount for /mount-9p ran successfully
I0729 17:11:53.412214   26459 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I0729 17:11:53.427757   26459 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=35741,trans=tcp,version=9p2000.L 192.168.39.1 /mount-9p"
I0729 17:11:53.479487   26459 main.go:125] stdlog: ufs.go:141 connected
I0729 17:11:53.479624   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tversion tag 65535 msize 65536 version '9P2000.L'
I0729 17:11:53.479664   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rversion tag 65535 msize 65536 version '9P2000'
I0729 17:11:53.480073   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I0729 17:11:53.480125   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rattach tag 0 aqid (20fa07a ff7a2447 'd')
I0729 17:11:53.481158   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tstat tag 0 fid 0
I0729 17:11:53.481283   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa07a ff7a2447 'd') m d775 at 0 mt 1722273113 l 4096 t 0 d 0 ext )
I0729 17:11:53.482163   26459 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/.mount-process: {Name:mk252a49f77491a22049489171dddbf1c8b6a036 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0729 17:11:53.482342   26459 mount.go:105] mount successful: ""
I0729 17:11:53.484090   26459 out.go:177] * Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port4044759520/001 to /mount-9p
I0729 17:11:53.485438   26459 out.go:177] 
I0729 17:11:53.486598   26459 out.go:177] * NOTE: This process must stay alive for the mount to be accessible ...
I0729 17:11:54.356657   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tstat tag 0 fid 0
I0729 17:11:54.356842   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa07a ff7a2447 'd') m d775 at 0 mt 1722273113 l 4096 t 0 d 0 ext )
I0729 17:11:54.359627   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Twalk tag 0 fid 0 newfid 1 
I0729 17:11:54.359670   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rwalk tag 0 
I0729 17:11:54.359905   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Topen tag 0 fid 1 mode 0
I0729 17:11:54.359986   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Ropen tag 0 qid (20fa07a ff7a2447 'd') iounit 0
I0729 17:11:54.360210   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tstat tag 0 fid 0
I0729 17:11:54.360294   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa07a ff7a2447 'd') m d775 at 0 mt 1722273113 l 4096 t 0 d 0 ext )
I0729 17:11:54.360553   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tread tag 0 fid 1 offset 0 count 65512
I0729 17:11:54.360745   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rread tag 0 count 258
I0729 17:11:54.360907   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tread tag 0 fid 1 offset 258 count 65254
I0729 17:11:54.360951   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rread tag 0 count 0
I0729 17:11:54.361101   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tread tag 0 fid 1 offset 258 count 65512
I0729 17:11:54.361141   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rread tag 0 count 0
I0729 17:11:54.361272   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I0729 17:11:54.361330   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rwalk tag 0 (20fa07e ff7a2447 '') 
I0729 17:11:54.361490   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tstat tag 0 fid 2
I0729 17:11:54.361622   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa07e ff7a2447 '') m 644 at 0 mt 1722273113 l 24 t 0 d 0 ext )
I0729 17:11:54.361773   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tstat tag 0 fid 2
I0729 17:11:54.361849   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa07e ff7a2447 '') m 644 at 0 mt 1722273113 l 24 t 0 d 0 ext )
I0729 17:11:54.362097   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tclunk tag 0 fid 2
I0729 17:11:54.362155   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rclunk tag 0
I0729 17:11:54.362346   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Twalk tag 0 fid 0 newfid 2 0:'test-1722273113165445788' 
I0729 17:11:54.362406   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rwalk tag 0 (20fa07f ff7a2447 '') 
I0729 17:11:54.362564   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tstat tag 0 fid 2
I0729 17:11:54.362631   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rstat tag 0 st ('test-1722273113165445788' 'jenkins' 'balintp' '' q (20fa07f ff7a2447 '') m 644 at 0 mt 1722273113 l 24 t 0 d 0 ext )
I0729 17:11:54.362808   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tstat tag 0 fid 2
I0729 17:11:54.362874   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rstat tag 0 st ('test-1722273113165445788' 'jenkins' 'balintp' '' q (20fa07f ff7a2447 '') m 644 at 0 mt 1722273113 l 24 t 0 d 0 ext )
I0729 17:11:54.363031   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tclunk tag 0 fid 2
I0729 17:11:54.363050   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rclunk tag 0
I0729 17:11:54.363183   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I0729 17:11:54.363226   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rwalk tag 0 (20fa07d ff7a2447 '') 
I0729 17:11:54.363368   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tstat tag 0 fid 2
I0729 17:11:54.363468   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa07d ff7a2447 '') m 644 at 0 mt 1722273113 l 24 t 0 d 0 ext )
I0729 17:11:54.363648   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tstat tag 0 fid 2
I0729 17:11:54.363728   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa07d ff7a2447 '') m 644 at 0 mt 1722273113 l 24 t 0 d 0 ext )
I0729 17:11:54.364067   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tclunk tag 0 fid 2
I0729 17:11:54.364101   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rclunk tag 0
I0729 17:11:54.364274   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tread tag 0 fid 1 offset 258 count 65512
I0729 17:11:54.364306   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rread tag 0 count 0
I0729 17:11:54.364537   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tclunk tag 0 fid 1
I0729 17:11:54.364582   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rclunk tag 0
I0729 17:11:54.632577   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Twalk tag 0 fid 0 newfid 1 0:'test-1722273113165445788' 
I0729 17:11:54.632644   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rwalk tag 0 (20fa07f ff7a2447 '') 
I0729 17:11:54.633668   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tstat tag 0 fid 1
I0729 17:11:54.633815   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rstat tag 0 st ('test-1722273113165445788' 'jenkins' 'balintp' '' q (20fa07f ff7a2447 '') m 644 at 0 mt 1722273113 l 24 t 0 d 0 ext )
I0729 17:11:54.634080   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Twalk tag 0 fid 1 newfid 2 
I0729 17:11:54.634121   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rwalk tag 0 
I0729 17:11:54.634301   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Topen tag 0 fid 2 mode 0
I0729 17:11:54.634382   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Ropen tag 0 qid (20fa07f ff7a2447 '') iounit 0
I0729 17:11:54.634526   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tstat tag 0 fid 1
I0729 17:11:54.634625   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rstat tag 0 st ('test-1722273113165445788' 'jenkins' 'balintp' '' q (20fa07f ff7a2447 '') m 644 at 0 mt 1722273113 l 24 t 0 d 0 ext )
I0729 17:11:54.634816   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tread tag 0 fid 2 offset 0 count 65512
I0729 17:11:54.634882   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rread tag 0 count 24
I0729 17:11:54.635064   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tread tag 0 fid 2 offset 24 count 65512
I0729 17:11:54.635097   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rread tag 0 count 0
I0729 17:11:54.635275   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tread tag 0 fid 2 offset 24 count 65512
I0729 17:11:54.635324   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rread tag 0 count 0
I0729 17:11:54.635498   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tclunk tag 0 fid 2
I0729 17:11:54.635533   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rclunk tag 0
I0729 17:11:54.635700   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tclunk tag 0 fid 1
I0729 17:11:54.635732   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rclunk tag 0
I0729 17:15:55.339403   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tstat tag 0 fid 0
I0729 17:15:55.339534   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa07a ff7a2447 'd') m d775 at 0 mt 1722273113 l 4096 t 0 d 0 ext )
I0729 17:15:55.340843   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Twalk tag 0 fid 0 newfid 1 
I0729 17:15:55.340888   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rwalk tag 0 
I0729 17:15:55.341097   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Topen tag 0 fid 1 mode 0
I0729 17:15:55.341170   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Ropen tag 0 qid (20fa07a ff7a2447 'd') iounit 0
I0729 17:15:55.341355   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tstat tag 0 fid 0
I0729 17:15:55.341454   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa07a ff7a2447 'd') m d775 at 0 mt 1722273113 l 4096 t 0 d 0 ext )
I0729 17:15:55.341656   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tread tag 0 fid 1 offset 0 count 65512
I0729 17:15:55.341833   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rread tag 0 count 258
I0729 17:15:55.342021   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tread tag 0 fid 1 offset 258 count 65254
I0729 17:15:55.342048   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rread tag 0 count 0
I0729 17:15:55.342217   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tread tag 0 fid 1 offset 258 count 65512
I0729 17:15:55.342248   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rread tag 0 count 0
I0729 17:15:55.342418   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I0729 17:15:55.342459   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rwalk tag 0 (20fa07e ff7a2447 '') 
I0729 17:15:55.342628   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tstat tag 0 fid 2
I0729 17:15:55.342711   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa07e ff7a2447 '') m 644 at 0 mt 1722273113 l 24 t 0 d 0 ext )
I0729 17:15:55.342874   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tstat tag 0 fid 2
I0729 17:15:55.342952   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa07e ff7a2447 '') m 644 at 0 mt 1722273113 l 24 t 0 d 0 ext )
I0729 17:15:55.343101   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tclunk tag 0 fid 2
I0729 17:15:55.343129   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rclunk tag 0
I0729 17:15:55.343360   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Twalk tag 0 fid 0 newfid 2 0:'test-1722273113165445788' 
I0729 17:15:55.343394   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rwalk tag 0 (20fa07f ff7a2447 '') 
I0729 17:15:55.343530   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tstat tag 0 fid 2
I0729 17:15:55.343619   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rstat tag 0 st ('test-1722273113165445788' 'jenkins' 'balintp' '' q (20fa07f ff7a2447 '') m 644 at 0 mt 1722273113 l 24 t 0 d 0 ext )
I0729 17:15:55.343885   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tstat tag 0 fid 2
I0729 17:15:55.343957   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rstat tag 0 st ('test-1722273113165445788' 'jenkins' 'balintp' '' q (20fa07f ff7a2447 '') m 644 at 0 mt 1722273113 l 24 t 0 d 0 ext )
I0729 17:15:55.344409   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tclunk tag 0 fid 2
I0729 17:15:55.344445   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rclunk tag 0
I0729 17:15:55.344668   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I0729 17:15:55.344703   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rwalk tag 0 (20fa07d ff7a2447 '') 
I0729 17:15:55.345066   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tstat tag 0 fid 2
I0729 17:15:55.345158   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa07d ff7a2447 '') m 644 at 0 mt 1722273113 l 24 t 0 d 0 ext )
I0729 17:15:55.345381   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tstat tag 0 fid 2
I0729 17:15:55.345465   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa07d ff7a2447 '') m 644 at 0 mt 1722273113 l 24 t 0 d 0 ext )
I0729 17:15:55.345641   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tclunk tag 0 fid 2
I0729 17:15:55.345672   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rclunk tag 0
I0729 17:15:55.345841   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tread tag 0 fid 1 offset 258 count 65512
I0729 17:15:55.345871   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rread tag 0 count 0
I0729 17:15:55.346040   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tclunk tag 0 fid 1
I0729 17:15:55.346081   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rclunk tag 0
I0729 17:15:55.348419   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I0729 17:15:55.348462   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rerror tag 0 ename 'file not found' ecode 0
I0729 17:15:55.534819   26459 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.26:58590 Tclunk tag 0 fid 0
I0729 17:15:55.534863   26459 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.26:58590 Rclunk tag 0
I0729 17:15:55.535460   26459 main.go:125] stdlog: ufs.go:147 disconnected
I0729 17:15:55.758644   26459 out.go:177] * Unmounting /mount-9p ...
I0729 17:15:55.760043   26459 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f /mount-9p || echo "
I0729 17:15:55.769186   26459 mount.go:180] unmount for /mount-9p ran successfully
I0729 17:15:55.769283   26459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/.mount-process: {Name:mk252a49f77491a22049489171dddbf1c8b6a036 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0729 17:15:55.770993   26459 out.go:177] 
W0729 17:15:55.772155   26459 out.go:239] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I0729 17:15:55.773454   26459 out.go:177] 
--- FAIL: TestFunctional/parallel/MountCmd/any-port (242.69s)
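For reference, the unmount step logged at the end of this trace only calls `umount -f` when `findmnt` still reports a filesystem at /mount-9p. A minimal, self-contained sketch of that guard follows; the Go wrapper and function name are illustrative, only the shell commands come from the log, and /mount-9p is the test's default mount point.

package main

// Standalone sketch of the unmount guard seen in the log above: force-unmount
// the mount point only when findmnt shows something mounted there.
// The /mount-9p path is the test's default and is an assumption here.

import (
	"fmt"
	"os/exec"
	"strings"
)

func unmountIfPresent(mountPoint string) error {
	// A failed findmnt is treated as "nothing mounted", like the shell guard
	// `[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f /mount-9p`.
	out, _ := exec.Command("findmnt", "-T", mountPoint).Output()
	if !strings.Contains(string(out), mountPoint) {
		return nil
	}
	return exec.Command("sudo", "umount", "-f", mountPoint).Run()
}

func main() {
	if err := unmountIfPresent("/mount-9p"); err != nil {
		fmt.Println("unmount failed:", err)
	}
}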

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 node stop m02 -v=7 --alsologtostderr
E0729 17:21:52.902737   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
E0729 17:22:20.587990   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-900414 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.457541728s)

                                                
                                                
-- stdout --
	* Stopping node "ha-900414-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:20:45.108298   34259 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:20:45.108412   34259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:20:45.108420   34259 out.go:304] Setting ErrFile to fd 2...
	I0729 17:20:45.108424   34259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:20:45.108583   34259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 17:20:45.108805   34259 mustload.go:65] Loading cluster: ha-900414
	I0729 17:20:45.109148   34259 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:20:45.109166   34259 stop.go:39] StopHost: ha-900414-m02
	I0729 17:20:45.109496   34259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:20:45.109544   34259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:20:45.125161   34259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44571
	I0729 17:20:45.125685   34259 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:20:45.126249   34259 main.go:141] libmachine: Using API Version  1
	I0729 17:20:45.126269   34259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:20:45.126607   34259 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:20:45.128954   34259 out.go:177] * Stopping node "ha-900414-m02"  ...
	I0729 17:20:45.130190   34259 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 17:20:45.130231   34259 main.go:141] libmachine: (ha-900414-m02) Calling .DriverName
	I0729 17:20:45.130465   34259 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 17:20:45.130494   34259 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:20:45.133291   34259 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:20:45.133725   34259 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:20:45.133763   34259 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:20:45.133879   34259 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:20:45.134081   34259 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:20:45.134243   34259 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:20:45.134407   34259 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/id_rsa Username:docker}
	I0729 17:20:45.219164   34259 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 17:20:45.273530   34259 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 17:20:45.328580   34259 main.go:141] libmachine: Stopping "ha-900414-m02"...
	I0729 17:20:45.328608   34259 main.go:141] libmachine: (ha-900414-m02) Calling .GetState
	I0729 17:20:45.330127   34259 main.go:141] libmachine: (ha-900414-m02) Calling .Stop
	I0729 17:20:45.333480   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 0/120
	I0729 17:20:46.335235   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 1/120
	I0729 17:20:47.336696   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 2/120
	I0729 17:20:48.338048   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 3/120
	I0729 17:20:49.339385   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 4/120
	I0729 17:20:50.341217   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 5/120
	I0729 17:20:51.342581   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 6/120
	I0729 17:20:52.344610   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 7/120
	I0729 17:20:53.345915   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 8/120
	I0729 17:20:54.347142   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 9/120
	I0729 17:20:55.348694   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 10/120
	I0729 17:20:56.350043   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 11/120
	I0729 17:20:57.351878   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 12/120
	I0729 17:20:58.353039   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 13/120
	I0729 17:20:59.354542   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 14/120
	I0729 17:21:00.356491   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 15/120
	I0729 17:21:01.357961   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 16/120
	I0729 17:21:02.359519   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 17/120
	I0729 17:21:03.360923   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 18/120
	I0729 17:21:04.362484   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 19/120
	I0729 17:21:05.363613   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 20/120
	I0729 17:21:06.365156   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 21/120
	I0729 17:21:07.366589   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 22/120
	I0729 17:21:08.368907   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 23/120
	I0729 17:21:09.370109   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 24/120
	I0729 17:21:10.371891   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 25/120
	I0729 17:21:11.373415   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 26/120
	I0729 17:21:12.374854   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 27/120
	I0729 17:21:13.376054   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 28/120
	I0729 17:21:14.377347   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 29/120
	I0729 17:21:15.379392   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 30/120
	I0729 17:21:16.380823   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 31/120
	I0729 17:21:17.382151   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 32/120
	I0729 17:21:18.383512   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 33/120
	I0729 17:21:19.384849   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 34/120
	I0729 17:21:20.386798   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 35/120
	I0729 17:21:21.388876   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 36/120
	I0729 17:21:22.390126   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 37/120
	I0729 17:21:23.391729   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 38/120
	I0729 17:21:24.393045   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 39/120
	I0729 17:21:25.394989   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 40/120
	I0729 17:21:26.397267   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 41/120
	I0729 17:21:27.398981   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 42/120
	I0729 17:21:28.400959   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 43/120
	I0729 17:21:29.402212   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 44/120
	I0729 17:21:30.404077   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 45/120
	I0729 17:21:31.405361   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 46/120
	I0729 17:21:32.407199   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 47/120
	I0729 17:21:33.408863   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 48/120
	I0729 17:21:34.410855   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 49/120
	I0729 17:21:35.412829   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 50/120
	I0729 17:21:36.414403   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 51/120
	I0729 17:21:37.415748   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 52/120
	I0729 17:21:38.417100   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 53/120
	I0729 17:21:39.419270   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 54/120
	I0729 17:21:40.421105   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 55/120
	I0729 17:21:41.422590   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 56/120
	I0729 17:21:42.424932   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 57/120
	I0729 17:21:43.427133   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 58/120
	I0729 17:21:44.429346   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 59/120
	I0729 17:21:45.431911   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 60/120
	I0729 17:21:46.433286   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 61/120
	I0729 17:21:47.435770   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 62/120
	I0729 17:21:48.437075   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 63/120
	I0729 17:21:49.439085   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 64/120
	I0729 17:21:50.441091   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 65/120
	I0729 17:21:51.442559   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 66/120
	I0729 17:21:52.443955   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 67/120
	I0729 17:21:53.445874   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 68/120
	I0729 17:21:54.447298   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 69/120
	I0729 17:21:55.449175   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 70/120
	I0729 17:21:56.451011   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 71/120
	I0729 17:21:57.452410   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 72/120
	I0729 17:21:58.453729   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 73/120
	I0729 17:21:59.455133   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 74/120
	I0729 17:22:00.456850   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 75/120
	I0729 17:22:01.458386   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 76/120
	I0729 17:22:02.459761   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 77/120
	I0729 17:22:03.461415   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 78/120
	I0729 17:22:04.463079   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 79/120
	I0729 17:22:05.464692   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 80/120
	I0729 17:22:06.466109   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 81/120
	I0729 17:22:07.467379   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 82/120
	I0729 17:22:08.468997   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 83/120
	I0729 17:22:09.471149   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 84/120
	I0729 17:22:10.472777   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 85/120
	I0729 17:22:11.473876   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 86/120
	I0729 17:22:12.475188   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 87/120
	I0729 17:22:13.476605   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 88/120
	I0729 17:22:14.477777   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 89/120
	I0729 17:22:15.479737   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 90/120
	I0729 17:22:16.481035   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 91/120
	I0729 17:22:17.482469   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 92/120
	I0729 17:22:18.483936   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 93/120
	I0729 17:22:19.485262   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 94/120
	I0729 17:22:20.486560   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 95/120
	I0729 17:22:21.487640   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 96/120
	I0729 17:22:22.488874   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 97/120
	I0729 17:22:23.490673   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 98/120
	I0729 17:22:24.492073   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 99/120
	I0729 17:22:25.493442   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 100/120
	I0729 17:22:26.494868   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 101/120
	I0729 17:22:27.496868   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 102/120
	I0729 17:22:28.498175   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 103/120
	I0729 17:22:29.499511   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 104/120
	I0729 17:22:30.501204   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 105/120
	I0729 17:22:31.502574   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 106/120
	I0729 17:22:32.504799   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 107/120
	I0729 17:22:33.506202   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 108/120
	I0729 17:22:34.507681   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 109/120
	I0729 17:22:35.509699   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 110/120
	I0729 17:22:36.510987   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 111/120
	I0729 17:22:37.513185   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 112/120
	I0729 17:22:38.514757   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 113/120
	I0729 17:22:39.516091   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 114/120
	I0729 17:22:40.517761   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 115/120
	I0729 17:22:41.519504   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 116/120
	I0729 17:22:42.521107   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 117/120
	I0729 17:22:43.522628   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 118/120
	I0729 17:22:44.524021   34259 main.go:141] libmachine: (ha-900414-m02) Waiting for machine to stop 119/120
	I0729 17:22:45.525462   34259 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 17:22:45.525588   34259 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-900414 node stop m02 -v=7 --alsologtostderr": exit status 30
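The stderr above shows libmachine polling the VM once per second for 120 attempts before giving up with `unable to stop vm, current state "Running"`. A minimal sketch of that stop-and-wait pattern is below; the function signature and closures are illustrative stand-ins, not minikube's actual driver API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// stopWithTimeout mirrors the pattern in the stderr above: issue a stop
// request, then poll the VM state once per second for up to `attempts`
// tries before giving up. The stop/state closures stand in for the
// libmachine driver's Stop/GetState calls and are illustrative only.
func stopWithTimeout(stop func() error, state func() string, attempts int) error {
	if err := stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		if state() != "Running" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// A fake driver that never stops reproduces the timeout seen in the test
	// (3 attempts here instead of 120 to keep the demo short).
	err := stopWithTimeout(func() error { return nil }, func() string { return "Running" }, 3)
	fmt.Println(err)
}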
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr: exit status 3 (19.148287125s)

                                                
                                                
-- stdout --
	ha-900414
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-900414-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-900414-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-900414-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:22:45.566523   34708 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:22:45.566625   34708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:22:45.566634   34708 out.go:304] Setting ErrFile to fd 2...
	I0729 17:22:45.566638   34708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:22:45.566836   34708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 17:22:45.566997   34708 out.go:298] Setting JSON to false
	I0729 17:22:45.567019   34708 mustload.go:65] Loading cluster: ha-900414
	I0729 17:22:45.567064   34708 notify.go:220] Checking for updates...
	I0729 17:22:45.567346   34708 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:22:45.567360   34708 status.go:255] checking status of ha-900414 ...
	I0729 17:22:45.567717   34708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:22:45.567780   34708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:22:45.587185   34708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36321
	I0729 17:22:45.587559   34708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:22:45.588055   34708 main.go:141] libmachine: Using API Version  1
	I0729 17:22:45.588103   34708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:22:45.588443   34708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:22:45.588621   34708 main.go:141] libmachine: (ha-900414) Calling .GetState
	I0729 17:22:45.590079   34708 status.go:330] ha-900414 host status = "Running" (err=<nil>)
	I0729 17:22:45.590096   34708 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:22:45.590509   34708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:22:45.590552   34708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:22:45.604752   34708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38071
	I0729 17:22:45.605142   34708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:22:45.605559   34708 main.go:141] libmachine: Using API Version  1
	I0729 17:22:45.605591   34708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:22:45.605871   34708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:22:45.606031   34708 main.go:141] libmachine: (ha-900414) Calling .GetIP
	I0729 17:22:45.608858   34708 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:22:45.609281   34708 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:22:45.609306   34708 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:22:45.609495   34708 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:22:45.609854   34708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:22:45.609892   34708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:22:45.625279   34708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37973
	I0729 17:22:45.625619   34708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:22:45.626049   34708 main.go:141] libmachine: Using API Version  1
	I0729 17:22:45.626063   34708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:22:45.626397   34708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:22:45.626574   34708 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:22:45.626771   34708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:22:45.626797   34708 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:22:45.629227   34708 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:22:45.629598   34708 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:22:45.629627   34708 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:22:45.629757   34708 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:22:45.630026   34708 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:22:45.630174   34708 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:22:45.630332   34708 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:22:45.715652   34708 ssh_runner.go:195] Run: systemctl --version
	I0729 17:22:45.722955   34708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:22:45.742716   34708 kubeconfig.go:125] found "ha-900414" server: "https://192.168.39.254:8443"
	I0729 17:22:45.742744   34708 api_server.go:166] Checking apiserver status ...
	I0729 17:22:45.742780   34708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:22:45.761763   34708 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1141/cgroup
	W0729 17:22:45.774339   34708 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1141/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:22:45.774400   34708 ssh_runner.go:195] Run: ls
	I0729 17:22:45.779060   34708 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:22:45.783159   34708 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:22:45.783180   34708 status.go:422] ha-900414 apiserver status = Running (err=<nil>)
	I0729 17:22:45.783193   34708 status.go:257] ha-900414 status: &{Name:ha-900414 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:22:45.783220   34708 status.go:255] checking status of ha-900414-m02 ...
	I0729 17:22:45.783506   34708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:22:45.783544   34708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:22:45.797855   34708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43681
	I0729 17:22:45.798250   34708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:22:45.798682   34708 main.go:141] libmachine: Using API Version  1
	I0729 17:22:45.798700   34708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:22:45.799010   34708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:22:45.799179   34708 main.go:141] libmachine: (ha-900414-m02) Calling .GetState
	I0729 17:22:45.800584   34708 status.go:330] ha-900414-m02 host status = "Running" (err=<nil>)
	I0729 17:22:45.800599   34708 host.go:66] Checking if "ha-900414-m02" exists ...
	I0729 17:22:45.800930   34708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:22:45.800972   34708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:22:45.814732   34708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39131
	I0729 17:22:45.815081   34708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:22:45.815511   34708 main.go:141] libmachine: Using API Version  1
	I0729 17:22:45.815533   34708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:22:45.815802   34708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:22:45.815997   34708 main.go:141] libmachine: (ha-900414-m02) Calling .GetIP
	I0729 17:22:45.818618   34708 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:22:45.818959   34708 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:22:45.818992   34708 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:22:45.819085   34708 host.go:66] Checking if "ha-900414-m02" exists ...
	I0729 17:22:45.819348   34708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:22:45.819377   34708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:22:45.833863   34708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46459
	I0729 17:22:45.834168   34708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:22:45.834654   34708 main.go:141] libmachine: Using API Version  1
	I0729 17:22:45.834679   34708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:22:45.835020   34708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:22:45.835211   34708 main.go:141] libmachine: (ha-900414-m02) Calling .DriverName
	I0729 17:22:45.835369   34708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:22:45.835385   34708 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:22:45.837879   34708 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:22:45.838272   34708 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:22:45.838293   34708 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:22:45.838427   34708 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:22:45.838588   34708 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:22:45.838733   34708 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:22:45.838871   34708 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/id_rsa Username:docker}
	W0729 17:23:04.306600   34708 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.111:22: connect: no route to host
	W0729 17:23:04.306706   34708 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	E0729 17:23:04.306730   34708 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0729 17:23:04.306739   34708 status.go:257] ha-900414-m02 status: &{Name:ha-900414-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 17:23:04.306778   34708 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0729 17:23:04.306792   34708 status.go:255] checking status of ha-900414-m03 ...
	I0729 17:23:04.307237   34708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:04.307285   34708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:04.322656   34708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45751
	I0729 17:23:04.323160   34708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:04.323629   34708 main.go:141] libmachine: Using API Version  1
	I0729 17:23:04.323650   34708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:04.324001   34708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:04.324369   34708 main.go:141] libmachine: (ha-900414-m03) Calling .GetState
	I0729 17:23:04.326108   34708 status.go:330] ha-900414-m03 host status = "Running" (err=<nil>)
	I0729 17:23:04.326126   34708 host.go:66] Checking if "ha-900414-m03" exists ...
	I0729 17:23:04.326548   34708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:04.326597   34708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:04.342300   34708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34561
	I0729 17:23:04.342819   34708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:04.343258   34708 main.go:141] libmachine: Using API Version  1
	I0729 17:23:04.343281   34708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:04.343600   34708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:04.343796   34708 main.go:141] libmachine: (ha-900414-m03) Calling .GetIP
	I0729 17:23:04.346685   34708 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:04.347128   34708 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:23:04.347149   34708 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:04.347282   34708 host.go:66] Checking if "ha-900414-m03" exists ...
	I0729 17:23:04.347588   34708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:04.347621   34708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:04.362651   34708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38057
	I0729 17:23:04.363027   34708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:04.363492   34708 main.go:141] libmachine: Using API Version  1
	I0729 17:23:04.363518   34708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:04.363856   34708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:04.364050   34708 main.go:141] libmachine: (ha-900414-m03) Calling .DriverName
	I0729 17:23:04.364253   34708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:04.364275   34708 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:23:04.367492   34708 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:04.367973   34708 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:23:04.368001   34708 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:04.368165   34708 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:23:04.368338   34708 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:23:04.368497   34708 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:23:04.368663   34708 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/id_rsa Username:docker}
	I0729 17:23:04.451750   34708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:23:04.472156   34708 kubeconfig.go:125] found "ha-900414" server: "https://192.168.39.254:8443"
	I0729 17:23:04.472185   34708 api_server.go:166] Checking apiserver status ...
	I0729 17:23:04.472220   34708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:23:04.490303   34708 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1524/cgroup
	W0729 17:23:04.500934   34708 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1524/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:23:04.500982   34708 ssh_runner.go:195] Run: ls
	I0729 17:23:04.505339   34708 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:23:04.509903   34708 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:23:04.509921   34708 status.go:422] ha-900414-m03 apiserver status = Running (err=<nil>)
	I0729 17:23:04.509930   34708 status.go:257] ha-900414-m03 status: &{Name:ha-900414-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:23:04.509947   34708 status.go:255] checking status of ha-900414-m04 ...
	I0729 17:23:04.510258   34708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:04.510295   34708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:04.525538   34708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46469
	I0729 17:23:04.525950   34708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:04.526443   34708 main.go:141] libmachine: Using API Version  1
	I0729 17:23:04.526464   34708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:04.526764   34708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:04.526952   34708 main.go:141] libmachine: (ha-900414-m04) Calling .GetState
	I0729 17:23:04.528541   34708 status.go:330] ha-900414-m04 host status = "Running" (err=<nil>)
	I0729 17:23:04.528557   34708 host.go:66] Checking if "ha-900414-m04" exists ...
	I0729 17:23:04.528880   34708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:04.528917   34708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:04.543616   34708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44195
	I0729 17:23:04.544019   34708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:04.544451   34708 main.go:141] libmachine: Using API Version  1
	I0729 17:23:04.544472   34708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:04.544781   34708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:04.544979   34708 main.go:141] libmachine: (ha-900414-m04) Calling .GetIP
	I0729 17:23:04.547484   34708 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:04.547889   34708 main.go:141] libmachine: (ha-900414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:eb:e5", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:19:51 +0000 UTC Type:0 Mac:52:54:00:a6:eb:e5 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-900414-m04 Clientid:01:52:54:00:a6:eb:e5}
	I0729 17:23:04.547922   34708 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined IP address 192.168.39.156 and MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:04.548064   34708 host.go:66] Checking if "ha-900414-m04" exists ...
	I0729 17:23:04.548349   34708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:04.548380   34708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:04.563779   34708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43491
	I0729 17:23:04.564099   34708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:04.564611   34708 main.go:141] libmachine: Using API Version  1
	I0729 17:23:04.564634   34708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:04.564932   34708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:04.565149   34708 main.go:141] libmachine: (ha-900414-m04) Calling .DriverName
	I0729 17:23:04.565331   34708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:04.565347   34708 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHHostname
	I0729 17:23:04.568154   34708 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:04.568536   34708 main.go:141] libmachine: (ha-900414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:eb:e5", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:19:51 +0000 UTC Type:0 Mac:52:54:00:a6:eb:e5 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-900414-m04 Clientid:01:52:54:00:a6:eb:e5}
	I0729 17:23:04.568573   34708 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined IP address 192.168.39.156 and MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:04.568710   34708 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHPort
	I0729 17:23:04.568856   34708 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHKeyPath
	I0729 17:23:04.568994   34708 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHUsername
	I0729 17:23:04.569224   34708 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m04/id_rsa Username:docker}
	I0729 17:23:04.655496   34708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:23:04.673009   34708 status.go:257] ha-900414-m04 status: &{Name:ha-900414-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr" : exit status 3
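The status probe in the stderr above measures each node's disk usage by running `sh -c "df -h /var | awk 'NR==2{print $5}'"` over SSH; when the SSH dial fails with `no route to host`, the node is reported as Error/Nonexistent. A local, self-contained sketch of that disk probe follows; the Go wrapper is illustrative, and only the shell command is taken from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// diskUsage runs the same probe the status command ships over SSH in the
// stderr above: df -h on the path, keeping the use% column of the second
// line. Running it locally instead of over SSH is the only change here.
func diskUsage(path string) (string, error) {
	cmd := fmt.Sprintf("df -h %s | awk 'NR==2{print $5}'", path)
	out, err := exec.Command("sh", "-c", cmd).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	usage, err := diskUsage("/var")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("/var usage:", usage)
}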
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-900414 -n ha-900414
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-900414 logs -n 25: (1.404026671s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-900414 cp ha-900414-m03:/home/docker/cp-test.txt                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3654370545/001/cp-test_ha-900414-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-900414 cp ha-900414-m03:/home/docker/cp-test.txt                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414:/home/docker/cp-test_ha-900414-m03_ha-900414.txt                       |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n ha-900414 sudo cat                                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | /home/docker/cp-test_ha-900414-m03_ha-900414.txt                                 |           |         |         |                     |                     |
	| cp      | ha-900414 cp ha-900414-m03:/home/docker/cp-test.txt                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m02:/home/docker/cp-test_ha-900414-m03_ha-900414-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n ha-900414-m02 sudo cat                                          | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | /home/docker/cp-test_ha-900414-m03_ha-900414-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-900414 cp ha-900414-m03:/home/docker/cp-test.txt                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04:/home/docker/cp-test_ha-900414-m03_ha-900414-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n ha-900414-m04 sudo cat                                          | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | /home/docker/cp-test_ha-900414-m03_ha-900414-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-900414 cp testdata/cp-test.txt                                                | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-900414 cp ha-900414-m04:/home/docker/cp-test.txt                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3654370545/001/cp-test_ha-900414-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-900414 cp ha-900414-m04:/home/docker/cp-test.txt                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414:/home/docker/cp-test_ha-900414-m04_ha-900414.txt                       |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n ha-900414 sudo cat                                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | /home/docker/cp-test_ha-900414-m04_ha-900414.txt                                 |           |         |         |                     |                     |
	| cp      | ha-900414 cp ha-900414-m04:/home/docker/cp-test.txt                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m02:/home/docker/cp-test_ha-900414-m04_ha-900414-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n ha-900414-m02 sudo cat                                          | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | /home/docker/cp-test_ha-900414-m04_ha-900414-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-900414 cp ha-900414-m04:/home/docker/cp-test.txt                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m03:/home/docker/cp-test_ha-900414-m04_ha-900414-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n ha-900414-m03 sudo cat                                          | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | /home/docker/cp-test_ha-900414-m04_ha-900414-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-900414 node stop m02 -v=7                                                     | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 17:15:59
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 17:15:59.676568   29751 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:15:59.676958   29751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:15:59.676978   29751 out.go:304] Setting ErrFile to fd 2...
	I0729 17:15:59.676987   29751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:15:59.677510   29751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 17:15:59.678388   29751 out.go:298] Setting JSON to false
	I0729 17:15:59.679421   29751 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3512,"bootTime":1722269848,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 17:15:59.679474   29751 start.go:139] virtualization: kvm guest
	I0729 17:15:59.681222   29751 out.go:177] * [ha-900414] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 17:15:59.682710   29751 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 17:15:59.682718   29751 notify.go:220] Checking for updates...
	I0729 17:15:59.684026   29751 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:15:59.685288   29751 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 17:15:59.686510   29751 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 17:15:59.687630   29751 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 17:15:59.688655   29751 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:15:59.689882   29751 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:15:59.724621   29751 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 17:15:59.725696   29751 start.go:297] selected driver: kvm2
	I0729 17:15:59.725706   29751 start.go:901] validating driver "kvm2" against <nil>
	I0729 17:15:59.725715   29751 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:15:59.726404   29751 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:15:59.726470   29751 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19345-11206/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 17:15:59.741438   29751 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 17:15:59.741474   29751 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 17:15:59.741694   29751 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:15:59.741750   29751 cni.go:84] Creating CNI manager for ""
	I0729 17:15:59.741761   29751 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 17:15:59.741767   29751 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 17:15:59.741821   29751 start.go:340] cluster config:
	{Name:ha-900414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-900414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:15:59.741914   29751 iso.go:125] acquiring lock: {Name:mke302f851ce8256f9b44dd080ed38df68285cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:15:59.743591   29751 out.go:177] * Starting "ha-900414" primary control-plane node in "ha-900414" cluster
	I0729 17:15:59.744900   29751 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:15:59.744955   29751 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 17:15:59.744968   29751 cache.go:56] Caching tarball of preloaded images
	I0729 17:15:59.745055   29751 preload.go:172] Found /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 17:15:59.745068   29751 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 17:15:59.745332   29751 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/config.json ...
	I0729 17:15:59.745352   29751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/config.json: {Name:mk6b9bd4ecd2940fba0f12ae60de6d6e9b718e49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:15:59.745485   29751 start.go:360] acquireMachinesLock for ha-900414: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:15:59.745525   29751 start.go:364] duration metric: took 28.47µs to acquireMachinesLock for "ha-900414"
	I0729 17:15:59.745543   29751 start.go:93] Provisioning new machine with config: &{Name:ha-900414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-900414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:15:59.745597   29751 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 17:15:59.747636   29751 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 17:15:59.747748   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:15:59.747779   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:15:59.762097   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42391
	I0729 17:15:59.762484   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:15:59.762945   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:15:59.762965   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:15:59.763285   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:15:59.763435   29751 main.go:141] libmachine: (ha-900414) Calling .GetMachineName
	I0729 17:15:59.763582   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:15:59.763718   29751 start.go:159] libmachine.API.Create for "ha-900414" (driver="kvm2")
	I0729 17:15:59.763740   29751 client.go:168] LocalClient.Create starting
	I0729 17:15:59.763769   29751 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem
	I0729 17:15:59.763803   29751 main.go:141] libmachine: Decoding PEM data...
	I0729 17:15:59.763818   29751 main.go:141] libmachine: Parsing certificate...
	I0729 17:15:59.763871   29751 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem
	I0729 17:15:59.763889   29751 main.go:141] libmachine: Decoding PEM data...
	I0729 17:15:59.763908   29751 main.go:141] libmachine: Parsing certificate...
	I0729 17:15:59.763931   29751 main.go:141] libmachine: Running pre-create checks...
	I0729 17:15:59.763939   29751 main.go:141] libmachine: (ha-900414) Calling .PreCreateCheck
	I0729 17:15:59.764279   29751 main.go:141] libmachine: (ha-900414) Calling .GetConfigRaw
	I0729 17:15:59.764582   29751 main.go:141] libmachine: Creating machine...
	I0729 17:15:59.764593   29751 main.go:141] libmachine: (ha-900414) Calling .Create
	I0729 17:15:59.764698   29751 main.go:141] libmachine: (ha-900414) Creating KVM machine...
	I0729 17:15:59.765861   29751 main.go:141] libmachine: (ha-900414) DBG | found existing default KVM network
	I0729 17:15:59.766644   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:15:59.766505   29790 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d980}
	I0729 17:15:59.766664   29751 main.go:141] libmachine: (ha-900414) DBG | created network xml: 
	I0729 17:15:59.766678   29751 main.go:141] libmachine: (ha-900414) DBG | <network>
	I0729 17:15:59.766693   29751 main.go:141] libmachine: (ha-900414) DBG |   <name>mk-ha-900414</name>
	I0729 17:15:59.766704   29751 main.go:141] libmachine: (ha-900414) DBG |   <dns enable='no'/>
	I0729 17:15:59.766714   29751 main.go:141] libmachine: (ha-900414) DBG |   
	I0729 17:15:59.766726   29751 main.go:141] libmachine: (ha-900414) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 17:15:59.766736   29751 main.go:141] libmachine: (ha-900414) DBG |     <dhcp>
	I0729 17:15:59.766760   29751 main.go:141] libmachine: (ha-900414) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 17:15:59.766776   29751 main.go:141] libmachine: (ha-900414) DBG |     </dhcp>
	I0729 17:15:59.766786   29751 main.go:141] libmachine: (ha-900414) DBG |   </ip>
	I0729 17:15:59.766800   29751 main.go:141] libmachine: (ha-900414) DBG |   
	I0729 17:15:59.766812   29751 main.go:141] libmachine: (ha-900414) DBG | </network>
	I0729 17:15:59.766821   29751 main.go:141] libmachine: (ha-900414) DBG | 
	I0729 17:15:59.771617   29751 main.go:141] libmachine: (ha-900414) DBG | trying to create private KVM network mk-ha-900414 192.168.39.0/24...
	I0729 17:15:59.836965   29751 main.go:141] libmachine: (ha-900414) DBG | private KVM network mk-ha-900414 192.168.39.0/24 created
	I0729 17:15:59.836997   29751 main.go:141] libmachine: (ha-900414) Setting up store path in /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414 ...
	I0729 17:15:59.837010   29751 main.go:141] libmachine: (ha-900414) Building disk image from file:///home/jenkins/minikube-integration/19345-11206/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 17:15:59.837021   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:15:59.836933   29790 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 17:15:59.837167   29751 main.go:141] libmachine: (ha-900414) Downloading /home/jenkins/minikube-integration/19345-11206/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19345-11206/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 17:16:00.074746   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:00.074622   29790 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa...
	I0729 17:16:00.313510   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:00.313359   29790 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/ha-900414.rawdisk...
	I0729 17:16:00.313549   29751 main.go:141] libmachine: (ha-900414) DBG | Writing magic tar header
	I0729 17:16:00.313564   29751 main.go:141] libmachine: (ha-900414) DBG | Writing SSH key tar header
	I0729 17:16:00.313577   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:00.313507   29790 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414 ...
	I0729 17:16:00.313661   29751 main.go:141] libmachine: (ha-900414) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414
	I0729 17:16:00.313679   29751 main.go:141] libmachine: (ha-900414) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube/machines
	I0729 17:16:00.313690   29751 main.go:141] libmachine: (ha-900414) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414 (perms=drwx------)
	I0729 17:16:00.313699   29751 main.go:141] libmachine: (ha-900414) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube/machines (perms=drwxr-xr-x)
	I0729 17:16:00.313705   29751 main.go:141] libmachine: (ha-900414) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube (perms=drwxr-xr-x)
	I0729 17:16:00.313712   29751 main.go:141] libmachine: (ha-900414) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 17:16:00.313722   29751 main.go:141] libmachine: (ha-900414) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206
	I0729 17:16:00.313728   29751 main.go:141] libmachine: (ha-900414) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 17:16:00.313735   29751 main.go:141] libmachine: (ha-900414) DBG | Checking permissions on dir: /home/jenkins
	I0729 17:16:00.313742   29751 main.go:141] libmachine: (ha-900414) DBG | Checking permissions on dir: /home
	I0729 17:16:00.313750   29751 main.go:141] libmachine: (ha-900414) DBG | Skipping /home - not owner
	I0729 17:16:00.313760   29751 main.go:141] libmachine: (ha-900414) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206 (perms=drwxrwxr-x)
	I0729 17:16:00.313797   29751 main.go:141] libmachine: (ha-900414) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 17:16:00.313816   29751 main.go:141] libmachine: (ha-900414) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 17:16:00.313838   29751 main.go:141] libmachine: (ha-900414) Creating domain...
	I0729 17:16:00.314925   29751 main.go:141] libmachine: (ha-900414) define libvirt domain using xml: 
	I0729 17:16:00.314946   29751 main.go:141] libmachine: (ha-900414) <domain type='kvm'>
	I0729 17:16:00.314956   29751 main.go:141] libmachine: (ha-900414)   <name>ha-900414</name>
	I0729 17:16:00.314968   29751 main.go:141] libmachine: (ha-900414)   <memory unit='MiB'>2200</memory>
	I0729 17:16:00.314979   29751 main.go:141] libmachine: (ha-900414)   <vcpu>2</vcpu>
	I0729 17:16:00.314984   29751 main.go:141] libmachine: (ha-900414)   <features>
	I0729 17:16:00.314989   29751 main.go:141] libmachine: (ha-900414)     <acpi/>
	I0729 17:16:00.314993   29751 main.go:141] libmachine: (ha-900414)     <apic/>
	I0729 17:16:00.315020   29751 main.go:141] libmachine: (ha-900414)     <pae/>
	I0729 17:16:00.315048   29751 main.go:141] libmachine: (ha-900414)     
	I0729 17:16:00.315058   29751 main.go:141] libmachine: (ha-900414)   </features>
	I0729 17:16:00.315063   29751 main.go:141] libmachine: (ha-900414)   <cpu mode='host-passthrough'>
	I0729 17:16:00.315068   29751 main.go:141] libmachine: (ha-900414)   
	I0729 17:16:00.315071   29751 main.go:141] libmachine: (ha-900414)   </cpu>
	I0729 17:16:00.315076   29751 main.go:141] libmachine: (ha-900414)   <os>
	I0729 17:16:00.315081   29751 main.go:141] libmachine: (ha-900414)     <type>hvm</type>
	I0729 17:16:00.315086   29751 main.go:141] libmachine: (ha-900414)     <boot dev='cdrom'/>
	I0729 17:16:00.315092   29751 main.go:141] libmachine: (ha-900414)     <boot dev='hd'/>
	I0729 17:16:00.315097   29751 main.go:141] libmachine: (ha-900414)     <bootmenu enable='no'/>
	I0729 17:16:00.315104   29751 main.go:141] libmachine: (ha-900414)   </os>
	I0729 17:16:00.315109   29751 main.go:141] libmachine: (ha-900414)   <devices>
	I0729 17:16:00.315116   29751 main.go:141] libmachine: (ha-900414)     <disk type='file' device='cdrom'>
	I0729 17:16:00.315123   29751 main.go:141] libmachine: (ha-900414)       <source file='/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/boot2docker.iso'/>
	I0729 17:16:00.315134   29751 main.go:141] libmachine: (ha-900414)       <target dev='hdc' bus='scsi'/>
	I0729 17:16:00.315157   29751 main.go:141] libmachine: (ha-900414)       <readonly/>
	I0729 17:16:00.315176   29751 main.go:141] libmachine: (ha-900414)     </disk>
	I0729 17:16:00.315189   29751 main.go:141] libmachine: (ha-900414)     <disk type='file' device='disk'>
	I0729 17:16:00.315201   29751 main.go:141] libmachine: (ha-900414)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 17:16:00.315218   29751 main.go:141] libmachine: (ha-900414)       <source file='/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/ha-900414.rawdisk'/>
	I0729 17:16:00.315230   29751 main.go:141] libmachine: (ha-900414)       <target dev='hda' bus='virtio'/>
	I0729 17:16:00.315241   29751 main.go:141] libmachine: (ha-900414)     </disk>
	I0729 17:16:00.315254   29751 main.go:141] libmachine: (ha-900414)     <interface type='network'>
	I0729 17:16:00.315268   29751 main.go:141] libmachine: (ha-900414)       <source network='mk-ha-900414'/>
	I0729 17:16:00.315279   29751 main.go:141] libmachine: (ha-900414)       <model type='virtio'/>
	I0729 17:16:00.315288   29751 main.go:141] libmachine: (ha-900414)     </interface>
	I0729 17:16:00.315299   29751 main.go:141] libmachine: (ha-900414)     <interface type='network'>
	I0729 17:16:00.315308   29751 main.go:141] libmachine: (ha-900414)       <source network='default'/>
	I0729 17:16:00.315318   29751 main.go:141] libmachine: (ha-900414)       <model type='virtio'/>
	I0729 17:16:00.315329   29751 main.go:141] libmachine: (ha-900414)     </interface>
	I0729 17:16:00.315345   29751 main.go:141] libmachine: (ha-900414)     <serial type='pty'>
	I0729 17:16:00.315358   29751 main.go:141] libmachine: (ha-900414)       <target port='0'/>
	I0729 17:16:00.315369   29751 main.go:141] libmachine: (ha-900414)     </serial>
	I0729 17:16:00.315380   29751 main.go:141] libmachine: (ha-900414)     <console type='pty'>
	I0729 17:16:00.315391   29751 main.go:141] libmachine: (ha-900414)       <target type='serial' port='0'/>
	I0729 17:16:00.315415   29751 main.go:141] libmachine: (ha-900414)     </console>
	I0729 17:16:00.315429   29751 main.go:141] libmachine: (ha-900414)     <rng model='virtio'>
	I0729 17:16:00.315443   29751 main.go:141] libmachine: (ha-900414)       <backend model='random'>/dev/random</backend>
	I0729 17:16:00.315453   29751 main.go:141] libmachine: (ha-900414)     </rng>
	I0729 17:16:00.315461   29751 main.go:141] libmachine: (ha-900414)     
	I0729 17:16:00.315470   29751 main.go:141] libmachine: (ha-900414)     
	I0729 17:16:00.315478   29751 main.go:141] libmachine: (ha-900414)   </devices>
	I0729 17:16:00.315487   29751 main.go:141] libmachine: (ha-900414) </domain>
	I0729 17:16:00.315496   29751 main.go:141] libmachine: (ha-900414) 
	I0729 17:16:00.319670   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:06:36:96 in network default
	I0729 17:16:00.320139   29751 main.go:141] libmachine: (ha-900414) Ensuring networks are active...
	I0729 17:16:00.320154   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:00.320747   29751 main.go:141] libmachine: (ha-900414) Ensuring network default is active
	I0729 17:16:00.320974   29751 main.go:141] libmachine: (ha-900414) Ensuring network mk-ha-900414 is active
	I0729 17:16:00.321597   29751 main.go:141] libmachine: (ha-900414) Getting domain xml...
	I0729 17:16:00.322398   29751 main.go:141] libmachine: (ha-900414) Creating domain...
	I0729 17:16:01.503985   29751 main.go:141] libmachine: (ha-900414) Waiting to get IP...
	I0729 17:16:01.504837   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:01.505292   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:01.505334   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:01.505277   29790 retry.go:31] will retry after 223.456895ms: waiting for machine to come up
	I0729 17:16:01.730850   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:01.731334   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:01.731360   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:01.731285   29790 retry.go:31] will retry after 358.601967ms: waiting for machine to come up
	I0729 17:16:02.092010   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:02.092531   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:02.092557   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:02.092480   29790 retry.go:31] will retry after 326.470702ms: waiting for machine to come up
	I0729 17:16:02.420941   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:02.421342   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:02.421367   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:02.421293   29790 retry.go:31] will retry after 592.274293ms: waiting for machine to come up
	I0729 17:16:03.014934   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:03.015310   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:03.015334   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:03.015269   29790 retry.go:31] will retry after 565.688093ms: waiting for machine to come up
	I0729 17:16:03.583027   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:03.583564   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:03.583589   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:03.583528   29790 retry.go:31] will retry after 638.104329ms: waiting for machine to come up
	I0729 17:16:04.223289   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:04.223682   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:04.223720   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:04.223653   29790 retry.go:31] will retry after 945.413379ms: waiting for machine to come up
	I0729 17:16:05.170448   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:05.170854   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:05.170879   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:05.170791   29790 retry.go:31] will retry after 1.059633806s: waiting for machine to come up
	I0729 17:16:06.232013   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:06.232499   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:06.232527   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:06.232449   29790 retry.go:31] will retry after 1.16821857s: waiting for machine to come up
	I0729 17:16:07.402715   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:07.403242   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:07.403271   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:07.403184   29790 retry.go:31] will retry after 1.541797905s: waiting for machine to come up
	I0729 17:16:08.947064   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:08.947472   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:08.947493   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:08.947452   29790 retry.go:31] will retry after 2.188109829s: waiting for machine to come up
	I0729 17:16:11.137679   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:11.138142   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:11.138169   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:11.138086   29790 retry.go:31] will retry after 3.499780988s: waiting for machine to come up
	I0729 17:16:14.641759   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:14.642210   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:14.642231   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:14.642166   29790 retry.go:31] will retry after 4.332731547s: waiting for machine to come up
	I0729 17:16:18.980304   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:18.980832   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:18.980864   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:18.980767   29790 retry.go:31] will retry after 5.360938119s: waiting for machine to come up
	I0729 17:16:24.343363   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:24.343835   29751 main.go:141] libmachine: (ha-900414) Found IP for machine: 192.168.39.114
	I0729 17:16:24.343874   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has current primary IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:24.343888   29751 main.go:141] libmachine: (ha-900414) Reserving static IP address...
	I0729 17:16:24.344201   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find host DHCP lease matching {name: "ha-900414", mac: "52:54:00:5a:29:8d", ip: "192.168.39.114"} in network mk-ha-900414
	I0729 17:16:24.414982   29751 main.go:141] libmachine: (ha-900414) DBG | Getting to WaitForSSH function...
	I0729 17:16:24.415004   29751 main.go:141] libmachine: (ha-900414) Reserved static IP address: 192.168.39.114
	I0729 17:16:24.415019   29751 main.go:141] libmachine: (ha-900414) Waiting for SSH to be available...
	I0729 17:16:24.417039   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:24.417427   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:24.417455   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:24.417543   29751 main.go:141] libmachine: (ha-900414) DBG | Using SSH client type: external
	I0729 17:16:24.417595   29751 main.go:141] libmachine: (ha-900414) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa (-rw-------)
	I0729 17:16:24.417626   29751 main.go:141] libmachine: (ha-900414) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 17:16:24.417645   29751 main.go:141] libmachine: (ha-900414) DBG | About to run SSH command:
	I0729 17:16:24.417662   29751 main.go:141] libmachine: (ha-900414) DBG | exit 0
	I0729 17:16:24.542583   29751 main.go:141] libmachine: (ha-900414) DBG | SSH cmd err, output: <nil>: 
	I0729 17:16:24.542918   29751 main.go:141] libmachine: (ha-900414) KVM machine creation complete!
	I0729 17:16:24.543406   29751 main.go:141] libmachine: (ha-900414) Calling .GetConfigRaw
	I0729 17:16:24.543927   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:16:24.544157   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:16:24.544367   29751 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 17:16:24.544384   29751 main.go:141] libmachine: (ha-900414) Calling .GetState
	I0729 17:16:24.545826   29751 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 17:16:24.545841   29751 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 17:16:24.545848   29751 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 17:16:24.545858   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:16:24.548387   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:24.548744   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:24.548768   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:24.548886   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:16:24.549058   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:24.549180   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:24.549292   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:16:24.549415   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:16:24.549590   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0729 17:16:24.549602   29751 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 17:16:24.653629   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:16:24.653650   29751 main.go:141] libmachine: Detecting the provisioner...
	I0729 17:16:24.653657   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:16:24.656346   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:24.656670   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:24.656706   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:24.656830   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:16:24.657006   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:24.657165   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:24.657322   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:16:24.657470   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:16:24.657670   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0729 17:16:24.657682   29751 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 17:16:24.763340   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 17:16:24.763408   29751 main.go:141] libmachine: found compatible host: buildroot
	I0729 17:16:24.763416   29751 main.go:141] libmachine: Provisioning with buildroot...
	I0729 17:16:24.763423   29751 main.go:141] libmachine: (ha-900414) Calling .GetMachineName
	I0729 17:16:24.763667   29751 buildroot.go:166] provisioning hostname "ha-900414"
	I0729 17:16:24.763693   29751 main.go:141] libmachine: (ha-900414) Calling .GetMachineName
	I0729 17:16:24.763895   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:16:24.766542   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:24.766942   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:24.766967   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:24.767150   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:16:24.767284   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:24.767472   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:24.767680   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:16:24.767869   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:16:24.768029   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0729 17:16:24.768041   29751 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-900414 && echo "ha-900414" | sudo tee /etc/hostname
	I0729 17:16:24.888774   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-900414
	
	I0729 17:16:24.888799   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:16:24.891638   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:24.892040   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:24.892070   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:24.892197   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:16:24.892383   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:24.892543   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:24.892676   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:16:24.892839   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:16:24.893044   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0729 17:16:24.893066   29751 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-900414' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-900414/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-900414' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 17:16:25.007667   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:16:25.007698   29751 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 17:16:25.007739   29751 buildroot.go:174] setting up certificates
	I0729 17:16:25.007751   29751 provision.go:84] configureAuth start
	I0729 17:16:25.007761   29751 main.go:141] libmachine: (ha-900414) Calling .GetMachineName
	I0729 17:16:25.008042   29751 main.go:141] libmachine: (ha-900414) Calling .GetIP
	I0729 17:16:25.010704   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.011044   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:25.011078   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.011192   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:16:25.013536   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.013812   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:25.013836   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.013995   29751 provision.go:143] copyHostCerts
	I0729 17:16:25.014024   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 17:16:25.014058   29751 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 17:16:25.014068   29751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 17:16:25.014130   29751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 17:16:25.014217   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 17:16:25.014235   29751 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 17:16:25.014239   29751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 17:16:25.014263   29751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 17:16:25.014316   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 17:16:25.014333   29751 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 17:16:25.014339   29751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 17:16:25.014374   29751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 17:16:25.014445   29751 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.ha-900414 san=[127.0.0.1 192.168.39.114 ha-900414 localhost minikube]
	I0729 17:16:25.088399   29751 provision.go:177] copyRemoteCerts
	I0729 17:16:25.088468   29751 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 17:16:25.088495   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:16:25.091613   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.091999   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:25.092027   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.092220   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:16:25.092394   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:25.092608   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:16:25.092748   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:16:25.176099   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 17:16:25.176191   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 17:16:25.200204   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 17:16:25.200283   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 17:16:25.223234   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 17:16:25.223304   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0729 17:16:25.246533   29751 provision.go:87] duration metric: took 238.768709ms to configureAuth
	I0729 17:16:25.246560   29751 buildroot.go:189] setting minikube options for container-runtime
	I0729 17:16:25.246752   29751 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:16:25.246830   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:16:25.249458   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.249805   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:25.249822   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.249988   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:16:25.250165   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:25.250342   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:25.250491   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:16:25.250643   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:16:25.250843   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0729 17:16:25.250874   29751 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 17:16:25.519886   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 17:16:25.519916   29751 main.go:141] libmachine: Checking connection to Docker...
	I0729 17:16:25.519925   29751 main.go:141] libmachine: (ha-900414) Calling .GetURL
	I0729 17:16:25.521139   29751 main.go:141] libmachine: (ha-900414) DBG | Using libvirt version 6000000
	I0729 17:16:25.523401   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.523788   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:25.523814   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.524023   29751 main.go:141] libmachine: Docker is up and running!
	I0729 17:16:25.524040   29751 main.go:141] libmachine: Reticulating splines...
	I0729 17:16:25.524047   29751 client.go:171] duration metric: took 25.760297654s to LocalClient.Create
	I0729 17:16:25.524069   29751 start.go:167] duration metric: took 25.760350985s to libmachine.API.Create "ha-900414"
	I0729 17:16:25.524077   29751 start.go:293] postStartSetup for "ha-900414" (driver="kvm2")
	I0729 17:16:25.524086   29751 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 17:16:25.524100   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:16:25.524350   29751 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 17:16:25.524370   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:16:25.526667   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.526989   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:25.527013   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.527208   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:16:25.527371   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:25.527499   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:16:25.527638   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:16:25.608806   29751 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 17:16:25.613178   29751 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 17:16:25.613197   29751 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 17:16:25.613251   29751 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 17:16:25.613340   29751 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 17:16:25.613355   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> /etc/ssl/certs/183932.pem
	I0729 17:16:25.613474   29751 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 17:16:25.622665   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 17:16:25.646959   29751 start.go:296] duration metric: took 122.870417ms for postStartSetup
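The filesync scan above turned /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem into the guest path /etc/ssl/certs/183932.pem before copying it over SSH. A minimal, hedged Go sketch of that kind of local-asset-to-guest-path mapping (the local root, function names, and walk logic here are illustrative, not minikube's actual filesync implementation):

    package main

    import (
    	"fmt"
    	"io/fs"
    	"path/filepath"
    )

    // mapAssets walks localRoot and returns guest-absolute destination paths that
    // mirror the relative layout under localRoot
    // (e.g. files/etc/ssl/certs/x.pem -> /etc/ssl/certs/x.pem).
    func mapAssets(localRoot string) (map[string]string, error) {
    	assets := map[string]string{}
    	err := filepath.WalkDir(localRoot, func(path string, d fs.DirEntry, err error) error {
    		if err != nil {
    			return err
    		}
    		if d.IsDir() {
    			return nil
    		}
    		rel, err := filepath.Rel(localRoot, path)
    		if err != nil {
    			return err
    		}
    		assets[path] = "/" + filepath.ToSlash(rel)
    		return nil
    	})
    	return assets, err
    }

    func main() {
    	// Illustrative local root; the run above used ~/.minikube/files.
    	m, err := mapAssets("/home/jenkins/.minikube/files")
    	if err != nil {
    		fmt.Println("walk failed:", err)
    		return
    	}
    	for src, dst := range m {
    		fmt.Printf("%s -> %s\n", src, dst)
    	}
    }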
	I0729 17:16:25.647002   29751 main.go:141] libmachine: (ha-900414) Calling .GetConfigRaw
	I0729 17:16:25.647614   29751 main.go:141] libmachine: (ha-900414) Calling .GetIP
	I0729 17:16:25.650408   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.650713   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:25.650735   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.650966   29751 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/config.json ...
	I0729 17:16:25.651158   29751 start.go:128] duration metric: took 25.90555269s to createHost
	I0729 17:16:25.651180   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:16:25.653612   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.653961   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:25.653982   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.654123   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:16:25.654303   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:25.654488   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:25.654626   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:16:25.654780   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:16:25.654955   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0729 17:16:25.654975   29751 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 17:16:25.763249   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722273385.741222344
	
	I0729 17:16:25.763271   29751 fix.go:216] guest clock: 1722273385.741222344
	I0729 17:16:25.763286   29751 fix.go:229] Guest: 2024-07-29 17:16:25.741222344 +0000 UTC Remote: 2024-07-29 17:16:25.651169706 +0000 UTC m=+26.007429590 (delta=90.052638ms)
	I0729 17:16:25.763306   29751 fix.go:200] guest clock delta is within tolerance: 90.052638ms
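fix.go compares the guest clock it read over SSH against the host's view of that same moment and accepts the ~90ms delta. A minimal Go sketch of such a tolerance check, using the two timestamps from the log above; the 1s tolerance is an assumption for illustration, the log does not state the threshold minikube applies:

    package main

    import (
    	"fmt"
    	"time"
    )

    // withinTolerance reports whether the absolute skew between guest and host
    // timestamps is at or below the allowed drift, and returns that skew.
    func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	// Values taken from the log above: guest clock vs. remote (host-side) timestamp.
    	guest := time.Date(2024, 7, 29, 17, 16, 25, 741222344, time.UTC)
    	host := time.Date(2024, 7, 29, 17, 16, 25, 651169706, time.UTC)

    	delta, ok := withinTolerance(guest, host, time.Second) // 1s tolerance is illustrative
    	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
    }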
	I0729 17:16:25.763311   29751 start.go:83] releasing machines lock for "ha-900414", held for 26.01777943s
	I0729 17:16:25.763328   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:16:25.763585   29751 main.go:141] libmachine: (ha-900414) Calling .GetIP
	I0729 17:16:25.766107   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.766581   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:25.766609   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.766733   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:16:25.767155   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:16:25.767309   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:16:25.767396   29751 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 17:16:25.767429   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:16:25.767498   29751 ssh_runner.go:195] Run: cat /version.json
	I0729 17:16:25.767514   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:16:25.770326   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.770535   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.770764   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:25.770790   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.770973   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:16:25.770985   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:25.771011   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.771170   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:16:25.771193   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:25.771292   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:25.771355   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:16:25.771422   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:16:25.771483   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:16:25.771571   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:16:25.884230   29751 ssh_runner.go:195] Run: systemctl --version
	I0729 17:16:25.890429   29751 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 17:16:26.046533   29751 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 17:16:26.052249   29751 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 17:16:26.052301   29751 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 17:16:26.069130   29751 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 17:16:26.069147   29751 start.go:495] detecting cgroup driver to use...
	I0729 17:16:26.069208   29751 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 17:16:26.086635   29751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 17:16:26.100857   29751 docker.go:217] disabling cri-docker service (if available) ...
	I0729 17:16:26.100909   29751 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 17:16:26.114412   29751 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 17:16:26.131217   29751 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 17:16:26.260546   29751 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 17:16:26.409176   29751 docker.go:233] disabling docker service ...
	I0729 17:16:26.409245   29751 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 17:16:26.423523   29751 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 17:16:26.436099   29751 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 17:16:26.577524   29751 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 17:16:26.703925   29751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 17:16:26.717445   29751 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 17:16:26.735004   29751 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 17:16:26.735048   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:16:26.745757   29751 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 17:16:26.745827   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:16:26.756432   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:16:26.766881   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:16:26.777521   29751 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 17:16:26.788302   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:16:26.799436   29751 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:16:26.819106   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:16:26.829194   29751 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 17:16:26.838407   29751 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 17:16:26.838466   29751 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 17:16:26.851462   29751 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
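The runtime setup above first probes net.bridge.bridge-nf-call-iptables, treats the missing /proc entry as a sign that br_netfilter is not loaded, and falls back to modprobe. A small Go sketch of that probe-then-load pattern using os/exec (command names mirror the log; error handling is simplified and this is not minikube's own code):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensureBridgeNetfilter checks the bridge-nf-call-iptables sysctl and, if the
    // key is missing (module not loaded), tries to load br_netfilter.
    func ensureBridgeNetfilter() error {
    	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
    		return nil // key exists, module already loaded
    	}
    	// Probe failed (e.g. /proc/sys/net/bridge/... missing): load the module.
    	if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
    		return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("bridge netfilter available")
    }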
	I0729 17:16:26.861215   29751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:16:26.985901   29751 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 17:16:27.125514   29751 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 17:16:27.125590   29751 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 17:16:27.130374   29751 start.go:563] Will wait 60s for crictl version
	I0729 17:16:27.130422   29751 ssh_runner.go:195] Run: which crictl
	I0729 17:16:27.134213   29751 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 17:16:27.172216   29751 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 17:16:27.172305   29751 ssh_runner.go:195] Run: crio --version
	I0729 17:16:27.199795   29751 ssh_runner.go:195] Run: crio --version
	I0729 17:16:27.229912   29751 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 17:16:27.231310   29751 main.go:141] libmachine: (ha-900414) Calling .GetIP
	I0729 17:16:27.234180   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:27.234609   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:27.234642   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:27.234789   29751 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 17:16:27.239065   29751 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 17:16:27.252230   29751 kubeadm.go:883] updating cluster {Name:ha-900414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-900414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 17:16:27.252330   29751 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:16:27.252386   29751 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 17:16:27.284998   29751 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 17:16:27.285145   29751 ssh_runner.go:195] Run: which lz4
	I0729 17:16:27.289201   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0729 17:16:27.289299   29751 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 17:16:27.293655   29751 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 17:16:27.293681   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 17:16:28.663963   29751 crio.go:462] duration metric: took 1.374697458s to copy over tarball
	I0729 17:16:28.664026   29751 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 17:16:30.851721   29751 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.187668412s)
	I0729 17:16:30.851741   29751 crio.go:469] duration metric: took 2.18775491s to extract the tarball
	I0729 17:16:30.851748   29751 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 17:16:30.889486   29751 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 17:16:30.935348   29751 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 17:16:30.935372   29751 cache_images.go:84] Images are preloaded, skipping loading
	I0729 17:16:30.935381   29751 kubeadm.go:934] updating node { 192.168.39.114 8443 v1.30.3 crio true true} ...
	I0729 17:16:30.935517   29751 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-900414 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-900414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 17:16:30.935601   29751 ssh_runner.go:195] Run: crio config
	I0729 17:16:30.979532   29751 cni.go:84] Creating CNI manager for ""
	I0729 17:16:30.979553   29751 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 17:16:30.979563   29751 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 17:16:30.979581   29751 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.114 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-900414 NodeName:ha-900414 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 17:16:30.979732   29751 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-900414"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 17:16:30.979759   29751 kube-vip.go:115] generating kube-vip config ...
	I0729 17:16:30.979803   29751 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 17:16:30.998345   29751 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 17:16:30.998464   29751 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
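The generated kube-vip manifest above advertises 192.168.39.254 as the control-plane VIP on port 8443 with load-balancing enabled. A minimal Go sketch of a TCP reachability probe against that VIP (purely illustrative; this is not one of minikube's own checks):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// VIP and port taken from the kube-vip config in the log above.
    	addr := net.JoinHostPort("192.168.39.254", "8443")

    	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
    	if err != nil {
    		fmt.Println("VIP not reachable yet:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("VIP answers on", addr)
    }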
	I0729 17:16:30.998526   29751 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 17:16:31.009025   29751 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 17:16:31.009094   29751 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 17:16:31.019681   29751 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 17:16:31.036876   29751 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 17:16:31.054074   29751 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 17:16:31.070322   29751 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0729 17:16:31.086267   29751 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 17:16:31.089926   29751 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 17:16:31.102733   29751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:16:31.225836   29751 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:16:31.242958   29751 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414 for IP: 192.168.39.114
	I0729 17:16:31.242977   29751 certs.go:194] generating shared ca certs ...
	I0729 17:16:31.242991   29751 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:16:31.243144   29751 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 17:16:31.243191   29751 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 17:16:31.243200   29751 certs.go:256] generating profile certs ...
	I0729 17:16:31.243259   29751 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.key
	I0729 17:16:31.243273   29751 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.crt with IP's: []
	I0729 17:16:31.374501   29751 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.crt ...
	I0729 17:16:31.374531   29751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.crt: {Name:mkb7b43c2afb7f6dbf658b43148a8f3bb44cbc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:16:31.374700   29751 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.key ...
	I0729 17:16:31.374709   29751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.key: {Name:mkb05bbb91e12e97873bf109d01e2f6483e49b7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:16:31.374785   29751 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.b5031bbd
	I0729 17:16:31.374800   29751 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.b5031bbd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114 192.168.39.254]
	I0729 17:16:31.695954   29751 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.b5031bbd ...
	I0729 17:16:31.695982   29751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.b5031bbd: {Name:mkbb6153a90029f4010f08b3c029806b5b14b049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:16:31.696158   29751 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.b5031bbd ...
	I0729 17:16:31.696172   29751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.b5031bbd: {Name:mk445d3afe4dca68bf414d39ecebb58f1ab9a59c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:16:31.696266   29751 certs.go:381] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.b5031bbd -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt
	I0729 17:16:31.696364   29751 certs.go:385] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.b5031bbd -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key
	I0729 17:16:31.696440   29751 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key
	I0729 17:16:31.696460   29751 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.crt with IP's: []
	I0729 17:16:31.758432   29751 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.crt ...
	I0729 17:16:31.758456   29751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.crt: {Name:mkaefbda7a5c157d6370f92a63212228c1be898d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:16:31.758609   29751 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key ...
	I0729 17:16:31.758621   29751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key: {Name:mk83611fe6757acef0f970b5a2af1c987798c2d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
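certs.go above generates profile certificates whose SANs include the service IP, localhost, the node IP, and the HA VIP. A self-contained Go sketch of issuing a leaf certificate with IP SANs signed by a throwaway CA; it is simplified relative to minikube's crypto.go, and the key sizes, validity, and names are illustrative only:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Throwaway CA key and self-signed CA certificate.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Leaf certificate carrying the IP SANs seen in the log above.
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("192.168.39.114"), net.ParseIP("192.168.39.254"),
    		},
    	}
    	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

    	// PEM-encode the leaf the way apiserver.crt would be written out.
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER})))
    }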
	I0729 17:16:31.758707   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 17:16:31.758724   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 17:16:31.758738   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 17:16:31.758758   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 17:16:31.758776   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 17:16:31.758791   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 17:16:31.758804   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 17:16:31.758817   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 17:16:31.758888   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 17:16:31.758930   29751 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 17:16:31.758944   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 17:16:31.758975   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 17:16:31.759004   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 17:16:31.759035   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 17:16:31.759086   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 17:16:31.759140   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:16:31.759176   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem -> /usr/share/ca-certificates/18393.pem
	I0729 17:16:31.759196   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> /usr/share/ca-certificates/183932.pem
	I0729 17:16:31.759703   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 17:16:31.785428   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 17:16:31.809264   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 17:16:31.832249   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 17:16:31.855181   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 17:16:31.878759   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 17:16:31.901923   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 17:16:31.924393   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 17:16:31.947254   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 17:16:31.970819   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 17:16:31.997211   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 17:16:32.038094   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 17:16:32.061962   29751 ssh_runner.go:195] Run: openssl version
	I0729 17:16:32.068624   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 17:16:32.080215   29751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:16:32.084892   29751 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:16:32.084946   29751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:16:32.090981   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 17:16:32.102031   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 17:16:32.113730   29751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 17:16:32.118688   29751 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 17:16:32.118746   29751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 17:16:32.125152   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 17:16:32.136583   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 17:16:32.147701   29751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 17:16:32.152181   29751 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 17:16:32.152225   29751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 17:16:32.158013   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 17:16:32.168641   29751 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 17:16:32.172560   29751 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 17:16:32.172615   29751 kubeadm.go:392] StartCluster: {Name:ha-900414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-900414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:16:32.172698   29751 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 17:16:32.172754   29751 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 17:16:32.209294   29751 cri.go:89] found id: ""
	I0729 17:16:32.209355   29751 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 17:16:32.219469   29751 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 17:16:32.228986   29751 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 17:16:32.239333   29751 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 17:16:32.239351   29751 kubeadm.go:157] found existing configuration files:
	
	I0729 17:16:32.239413   29751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 17:16:32.248360   29751 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 17:16:32.248414   29751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 17:16:32.257941   29751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 17:16:32.267109   29751 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 17:16:32.267167   29751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 17:16:32.276856   29751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 17:16:32.286060   29751 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 17:16:32.286119   29751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 17:16:32.295868   29751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 17:16:32.305171   29751 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 17:16:32.305232   29751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 17:16:32.315037   29751 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 17:16:32.556657   29751 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 17:16:44.366168   29751 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 17:16:44.366224   29751 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 17:16:44.366300   29751 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 17:16:44.366449   29751 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 17:16:44.366579   29751 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 17:16:44.366675   29751 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 17:16:44.368308   29751 out.go:204]   - Generating certificates and keys ...
	I0729 17:16:44.368393   29751 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 17:16:44.368480   29751 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 17:16:44.368585   29751 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 17:16:44.368661   29751 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 17:16:44.368739   29751 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 17:16:44.368807   29751 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 17:16:44.368884   29751 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 17:16:44.369040   29751 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-900414 localhost] and IPs [192.168.39.114 127.0.0.1 ::1]
	I0729 17:16:44.369119   29751 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 17:16:44.369252   29751 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-900414 localhost] and IPs [192.168.39.114 127.0.0.1 ::1]
	I0729 17:16:44.369338   29751 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 17:16:44.369419   29751 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 17:16:44.369458   29751 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 17:16:44.369506   29751 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 17:16:44.369566   29751 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 17:16:44.369663   29751 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 17:16:44.369767   29751 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 17:16:44.369830   29751 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 17:16:44.369900   29751 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 17:16:44.370025   29751 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 17:16:44.370127   29751 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 17:16:44.372294   29751 out.go:204]   - Booting up control plane ...
	I0729 17:16:44.372393   29751 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 17:16:44.372472   29751 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 17:16:44.372549   29751 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 17:16:44.372637   29751 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 17:16:44.372730   29751 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 17:16:44.372773   29751 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 17:16:44.372924   29751 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 17:16:44.372990   29751 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 17:16:44.373039   29751 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.405714ms
	I0729 17:16:44.373102   29751 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 17:16:44.373176   29751 kubeadm.go:310] [api-check] The API server is healthy after 6.044431111s
	I0729 17:16:44.373284   29751 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 17:16:44.373401   29751 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 17:16:44.373450   29751 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 17:16:44.373695   29751 kubeadm.go:310] [mark-control-plane] Marking the node ha-900414 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 17:16:44.373746   29751 kubeadm.go:310] [bootstrap-token] Using token: ccbc6e.3vl1qmuqbu37bz1a
	I0729 17:16:44.375013   29751 out.go:204]   - Configuring RBAC rules ...
	I0729 17:16:44.375101   29751 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 17:16:44.375181   29751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 17:16:44.375300   29751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 17:16:44.375405   29751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 17:16:44.375507   29751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 17:16:44.375609   29751 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 17:16:44.375739   29751 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 17:16:44.375794   29751 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 17:16:44.375858   29751 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 17:16:44.375867   29751 kubeadm.go:310] 
	I0729 17:16:44.375948   29751 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 17:16:44.375957   29751 kubeadm.go:310] 
	I0729 17:16:44.376067   29751 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 17:16:44.376079   29751 kubeadm.go:310] 
	I0729 17:16:44.376125   29751 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 17:16:44.376213   29751 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 17:16:44.376284   29751 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 17:16:44.376294   29751 kubeadm.go:310] 
	I0729 17:16:44.376371   29751 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 17:16:44.376381   29751 kubeadm.go:310] 
	I0729 17:16:44.376446   29751 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 17:16:44.376459   29751 kubeadm.go:310] 
	I0729 17:16:44.376535   29751 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 17:16:44.376646   29751 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 17:16:44.376751   29751 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 17:16:44.376763   29751 kubeadm.go:310] 
	I0729 17:16:44.376875   29751 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 17:16:44.376971   29751 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 17:16:44.376987   29751 kubeadm.go:310] 
	I0729 17:16:44.377089   29751 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ccbc6e.3vl1qmuqbu37bz1a \
	I0729 17:16:44.377215   29751 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 \
	I0729 17:16:44.377235   29751 kubeadm.go:310] 	--control-plane 
	I0729 17:16:44.377241   29751 kubeadm.go:310] 
	I0729 17:16:44.377308   29751 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 17:16:44.377316   29751 kubeadm.go:310] 
	I0729 17:16:44.377384   29751 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ccbc6e.3vl1qmuqbu37bz1a \
	I0729 17:16:44.377490   29751 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 
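The join commands above carry a --discovery-token-ca-cert-hash, which kubeadm computes as the SHA-256 of the cluster CA certificate's Subject Public Key Info. A short Go sketch that recomputes that hash from a CA PEM file; the path is illustrative:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// Illustrative path; on a minikube node the CA lives at /var/lib/minikube/certs/ca.crt.
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		fmt.Println("read CA:", err)
    		return
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		fmt.Println("no PEM block found")
    		return
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Println("parse CA:", err)
    		return
    	}
    	// The discovery hash is sha256 over the certificate's SubjectPublicKeyInfo DER.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }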
	I0729 17:16:44.377502   29751 cni.go:84] Creating CNI manager for ""
	I0729 17:16:44.377507   29751 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 17:16:44.379813   29751 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0729 17:16:44.380988   29751 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0729 17:16:44.386811   29751 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0729 17:16:44.386828   29751 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0729 17:16:44.406508   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0729 17:16:44.772578   29751 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 17:16:44.772637   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:44.772653   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-900414 minikube.k8s.io/updated_at=2024_07_29T17_16_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899 minikube.k8s.io/name=ha-900414 minikube.k8s.io/primary=true
	I0729 17:16:44.815083   29751 ops.go:34] apiserver oom_adj: -16
	I0729 17:16:44.954862   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:45.455907   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:45.955132   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:46.455775   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:46.955157   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:47.454942   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:47.955856   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:48.455120   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:48.955010   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:49.455369   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:49.955570   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:50.455913   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:50.955887   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:51.455267   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:51.955546   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:52.455700   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:52.955656   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:53.455646   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:53.955585   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:54.455734   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:54.955004   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:55.455549   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:55.955280   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:56.455160   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:56.955292   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:57.455205   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:57.610197   29751 kubeadm.go:1113] duration metric: took 12.83761215s to wait for elevateKubeSystemPrivileges
	I0729 17:16:57.610234   29751 kubeadm.go:394] duration metric: took 25.437623888s to StartCluster
	I0729 17:16:57.610256   29751 settings.go:142] acquiring lock: {Name:mkd2c4591636cc1d19b23a0dab1807db2e7ea395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:16:57.610345   29751 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 17:16:57.611225   29751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:16:57.611478   29751 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 17:16:57.611490   29751 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:16:57.611514   29751 start.go:241] waiting for startup goroutines ...
	I0729 17:16:57.611522   29751 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 17:16:57.611579   29751 addons.go:69] Setting storage-provisioner=true in profile "ha-900414"
	I0729 17:16:57.611584   29751 addons.go:69] Setting default-storageclass=true in profile "ha-900414"
	I0729 17:16:57.611609   29751 addons.go:234] Setting addon storage-provisioner=true in "ha-900414"
	I0729 17:16:57.611639   29751 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:16:57.611674   29751 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:16:57.611611   29751 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-900414"
	I0729 17:16:57.611997   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:16:57.612025   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:16:57.612044   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:16:57.612072   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:16:57.626933   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42415
	I0729 17:16:57.626966   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40123
	I0729 17:16:57.627401   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:16:57.627410   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:16:57.627909   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:16:57.627924   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:16:57.628051   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:16:57.628070   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:16:57.628246   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:16:57.628364   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:16:57.628489   29751 main.go:141] libmachine: (ha-900414) Calling .GetState
	I0729 17:16:57.628845   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:16:57.628882   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:16:57.630437   29751 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 17:16:57.630640   29751 kapi.go:59] client config for ha-900414: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.crt", KeyFile:"/home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.key", CAFile:"/home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 17:16:57.631041   29751 cert_rotation.go:137] Starting client certificate rotation controller
	I0729 17:16:57.631181   29751 addons.go:234] Setting addon default-storageclass=true in "ha-900414"
	I0729 17:16:57.631211   29751 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:16:57.631431   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:16:57.631460   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:16:57.643727   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42127
	I0729 17:16:57.644309   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:16:57.644832   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:16:57.644855   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:16:57.644944   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46585
	I0729 17:16:57.645218   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:16:57.645270   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:16:57.645373   29751 main.go:141] libmachine: (ha-900414) Calling .GetState
	I0729 17:16:57.645699   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:16:57.645717   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:16:57.646055   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:16:57.646502   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:16:57.646524   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:16:57.647190   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:16:57.649355   29751 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 17:16:57.650805   29751 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 17:16:57.650819   29751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 17:16:57.650833   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:16:57.654412   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:57.654806   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:57.654829   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:57.654971   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:16:57.655140   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:57.655314   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:16:57.655472   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:16:57.662001   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39219
	I0729 17:16:57.662355   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:16:57.662786   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:16:57.662809   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:16:57.663109   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:16:57.663291   29751 main.go:141] libmachine: (ha-900414) Calling .GetState
	I0729 17:16:57.664562   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:16:57.664773   29751 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 17:16:57.664787   29751 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 17:16:57.664806   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:16:57.667300   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:57.667686   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:57.667712   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:57.667941   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:16:57.668099   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:57.668250   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:16:57.668378   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:16:57.767183   29751 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 17:16:57.791168   29751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 17:16:57.847824   29751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 17:16:58.284156   29751 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0729 17:16:58.284193   29751 main.go:141] libmachine: Making call to close driver server
	I0729 17:16:58.284212   29751 main.go:141] libmachine: (ha-900414) Calling .Close
	I0729 17:16:58.284483   29751 main.go:141] libmachine: (ha-900414) DBG | Closing plugin on server side
	I0729 17:16:58.284516   29751 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:16:58.284528   29751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:16:58.284545   29751 main.go:141] libmachine: Making call to close driver server
	I0729 17:16:58.284554   29751 main.go:141] libmachine: (ha-900414) Calling .Close
	I0729 17:16:58.284794   29751 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:16:58.284807   29751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:16:58.284935   29751 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0729 17:16:58.284944   29751 round_trippers.go:469] Request Headers:
	I0729 17:16:58.284955   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:16:58.284962   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:16:58.295163   29751 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0729 17:16:58.295695   29751 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0729 17:16:58.295709   29751 round_trippers.go:469] Request Headers:
	I0729 17:16:58.295719   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:16:58.295728   29751 round_trippers.go:473]     Content-Type: application/json
	I0729 17:16:58.295732   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:16:58.303886   29751 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0729 17:16:58.304015   29751 main.go:141] libmachine: Making call to close driver server
	I0729 17:16:58.304025   29751 main.go:141] libmachine: (ha-900414) Calling .Close
	I0729 17:16:58.304276   29751 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:16:58.304295   29751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:16:58.304297   29751 main.go:141] libmachine: (ha-900414) DBG | Closing plugin on server side
	I0729 17:16:58.515757   29751 main.go:141] libmachine: Making call to close driver server
	I0729 17:16:58.515781   29751 main.go:141] libmachine: (ha-900414) Calling .Close
	I0729 17:16:58.516040   29751 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:16:58.516057   29751 main.go:141] libmachine: (ha-900414) DBG | Closing plugin on server side
	I0729 17:16:58.516062   29751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:16:58.516072   29751 main.go:141] libmachine: Making call to close driver server
	I0729 17:16:58.516079   29751 main.go:141] libmachine: (ha-900414) Calling .Close
	I0729 17:16:58.516303   29751 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:16:58.516319   29751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:16:58.516321   29751 main.go:141] libmachine: (ha-900414) DBG | Closing plugin on server side
	I0729 17:16:58.518110   29751 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0729 17:16:58.519224   29751 addons.go:510] duration metric: took 907.699792ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0729 17:16:58.519258   29751 start.go:246] waiting for cluster config update ...
	I0729 17:16:58.519272   29751 start.go:255] writing updated cluster config ...
	I0729 17:16:58.520741   29751 out.go:177] 
	I0729 17:16:58.521901   29751 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:16:58.521968   29751 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/config.json ...
	I0729 17:16:58.523471   29751 out.go:177] * Starting "ha-900414-m02" control-plane node in "ha-900414" cluster
	I0729 17:16:58.524524   29751 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:16:58.524544   29751 cache.go:56] Caching tarball of preloaded images
	I0729 17:16:58.524616   29751 preload.go:172] Found /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 17:16:58.524628   29751 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 17:16:58.524682   29751 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/config.json ...
	I0729 17:16:58.524829   29751 start.go:360] acquireMachinesLock for ha-900414-m02: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:16:58.524877   29751 start.go:364] duration metric: took 31.635µs to acquireMachinesLock for "ha-900414-m02"
	I0729 17:16:58.524893   29751 start.go:93] Provisioning new machine with config: &{Name:ha-900414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-900414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:16:58.524954   29751 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0729 17:16:58.526343   29751 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 17:16:58.526429   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:16:58.526451   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:16:58.540615   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45315
	I0729 17:16:58.541056   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:16:58.541504   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:16:58.541524   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:16:58.541833   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:16:58.542024   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetMachineName
	I0729 17:16:58.542159   29751 main.go:141] libmachine: (ha-900414-m02) Calling .DriverName
	I0729 17:16:58.542309   29751 start.go:159] libmachine.API.Create for "ha-900414" (driver="kvm2")
	I0729 17:16:58.542331   29751 client.go:168] LocalClient.Create starting
	I0729 17:16:58.542373   29751 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem
	I0729 17:16:58.542415   29751 main.go:141] libmachine: Decoding PEM data...
	I0729 17:16:58.542436   29751 main.go:141] libmachine: Parsing certificate...
	I0729 17:16:58.542499   29751 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem
	I0729 17:16:58.542526   29751 main.go:141] libmachine: Decoding PEM data...
	I0729 17:16:58.542541   29751 main.go:141] libmachine: Parsing certificate...
	I0729 17:16:58.542572   29751 main.go:141] libmachine: Running pre-create checks...
	I0729 17:16:58.542583   29751 main.go:141] libmachine: (ha-900414-m02) Calling .PreCreateCheck
	I0729 17:16:58.542728   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetConfigRaw
	I0729 17:16:58.543136   29751 main.go:141] libmachine: Creating machine...
	I0729 17:16:58.543150   29751 main.go:141] libmachine: (ha-900414-m02) Calling .Create
	I0729 17:16:58.543292   29751 main.go:141] libmachine: (ha-900414-m02) Creating KVM machine...
	I0729 17:16:58.544385   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found existing default KVM network
	I0729 17:16:58.544525   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found existing private KVM network mk-ha-900414
	I0729 17:16:58.544645   29751 main.go:141] libmachine: (ha-900414-m02) Setting up store path in /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02 ...
	I0729 17:16:58.544668   29751 main.go:141] libmachine: (ha-900414-m02) Building disk image from file:///home/jenkins/minikube-integration/19345-11206/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 17:16:58.544721   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:16:58.544636   30168 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 17:16:58.544853   29751 main.go:141] libmachine: (ha-900414-m02) Downloading /home/jenkins/minikube-integration/19345-11206/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19345-11206/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 17:16:58.772906   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:16:58.772779   30168 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/id_rsa...
	I0729 17:16:58.905768   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:16:58.905649   30168 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/ha-900414-m02.rawdisk...
	I0729 17:16:58.905810   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Writing magic tar header
	I0729 17:16:58.905864   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Writing SSH key tar header
	I0729 17:16:58.905897   29751 main.go:141] libmachine: (ha-900414-m02) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02 (perms=drwx------)
	I0729 17:16:58.905914   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:16:58.905754   30168 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02 ...
	I0729 17:16:58.905938   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02
	I0729 17:16:58.905958   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube/machines
	I0729 17:16:58.905972   29751 main.go:141] libmachine: (ha-900414-m02) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube/machines (perms=drwxr-xr-x)
	I0729 17:16:58.905986   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 17:16:58.905998   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206
	I0729 17:16:58.906003   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 17:16:58.906018   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Checking permissions on dir: /home/jenkins
	I0729 17:16:58.906026   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Checking permissions on dir: /home
	I0729 17:16:58.906040   29751 main.go:141] libmachine: (ha-900414-m02) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube (perms=drwxr-xr-x)
	I0729 17:16:58.906057   29751 main.go:141] libmachine: (ha-900414-m02) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206 (perms=drwxrwxr-x)
	I0729 17:16:58.906069   29751 main.go:141] libmachine: (ha-900414-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 17:16:58.906080   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Skipping /home - not owner
	I0729 17:16:58.906093   29751 main.go:141] libmachine: (ha-900414-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 17:16:58.906100   29751 main.go:141] libmachine: (ha-900414-m02) Creating domain...
	I0729 17:16:58.906988   29751 main.go:141] libmachine: (ha-900414-m02) define libvirt domain using xml: 
	I0729 17:16:58.907009   29751 main.go:141] libmachine: (ha-900414-m02) <domain type='kvm'>
	I0729 17:16:58.907018   29751 main.go:141] libmachine: (ha-900414-m02)   <name>ha-900414-m02</name>
	I0729 17:16:58.907026   29751 main.go:141] libmachine: (ha-900414-m02)   <memory unit='MiB'>2200</memory>
	I0729 17:16:58.907036   29751 main.go:141] libmachine: (ha-900414-m02)   <vcpu>2</vcpu>
	I0729 17:16:58.907048   29751 main.go:141] libmachine: (ha-900414-m02)   <features>
	I0729 17:16:58.907056   29751 main.go:141] libmachine: (ha-900414-m02)     <acpi/>
	I0729 17:16:58.907063   29751 main.go:141] libmachine: (ha-900414-m02)     <apic/>
	I0729 17:16:58.907075   29751 main.go:141] libmachine: (ha-900414-m02)     <pae/>
	I0729 17:16:58.907089   29751 main.go:141] libmachine: (ha-900414-m02)     
	I0729 17:16:58.907096   29751 main.go:141] libmachine: (ha-900414-m02)   </features>
	I0729 17:16:58.907107   29751 main.go:141] libmachine: (ha-900414-m02)   <cpu mode='host-passthrough'>
	I0729 17:16:58.907117   29751 main.go:141] libmachine: (ha-900414-m02)   
	I0729 17:16:58.907126   29751 main.go:141] libmachine: (ha-900414-m02)   </cpu>
	I0729 17:16:58.907138   29751 main.go:141] libmachine: (ha-900414-m02)   <os>
	I0729 17:16:58.907144   29751 main.go:141] libmachine: (ha-900414-m02)     <type>hvm</type>
	I0729 17:16:58.907164   29751 main.go:141] libmachine: (ha-900414-m02)     <boot dev='cdrom'/>
	I0729 17:16:58.907177   29751 main.go:141] libmachine: (ha-900414-m02)     <boot dev='hd'/>
	I0729 17:16:58.907187   29751 main.go:141] libmachine: (ha-900414-m02)     <bootmenu enable='no'/>
	I0729 17:16:58.907197   29751 main.go:141] libmachine: (ha-900414-m02)   </os>
	I0729 17:16:58.907208   29751 main.go:141] libmachine: (ha-900414-m02)   <devices>
	I0729 17:16:58.907219   29751 main.go:141] libmachine: (ha-900414-m02)     <disk type='file' device='cdrom'>
	I0729 17:16:58.907236   29751 main.go:141] libmachine: (ha-900414-m02)       <source file='/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/boot2docker.iso'/>
	I0729 17:16:58.907247   29751 main.go:141] libmachine: (ha-900414-m02)       <target dev='hdc' bus='scsi'/>
	I0729 17:16:58.907269   29751 main.go:141] libmachine: (ha-900414-m02)       <readonly/>
	I0729 17:16:58.907287   29751 main.go:141] libmachine: (ha-900414-m02)     </disk>
	I0729 17:16:58.907297   29751 main.go:141] libmachine: (ha-900414-m02)     <disk type='file' device='disk'>
	I0729 17:16:58.907314   29751 main.go:141] libmachine: (ha-900414-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 17:16:58.907330   29751 main.go:141] libmachine: (ha-900414-m02)       <source file='/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/ha-900414-m02.rawdisk'/>
	I0729 17:16:58.907341   29751 main.go:141] libmachine: (ha-900414-m02)       <target dev='hda' bus='virtio'/>
	I0729 17:16:58.907352   29751 main.go:141] libmachine: (ha-900414-m02)     </disk>
	I0729 17:16:58.907363   29751 main.go:141] libmachine: (ha-900414-m02)     <interface type='network'>
	I0729 17:16:58.907372   29751 main.go:141] libmachine: (ha-900414-m02)       <source network='mk-ha-900414'/>
	I0729 17:16:58.907382   29751 main.go:141] libmachine: (ha-900414-m02)       <model type='virtio'/>
	I0729 17:16:58.907393   29751 main.go:141] libmachine: (ha-900414-m02)     </interface>
	I0729 17:16:58.907408   29751 main.go:141] libmachine: (ha-900414-m02)     <interface type='network'>
	I0729 17:16:58.907420   29751 main.go:141] libmachine: (ha-900414-m02)       <source network='default'/>
	I0729 17:16:58.907428   29751 main.go:141] libmachine: (ha-900414-m02)       <model type='virtio'/>
	I0729 17:16:58.907438   29751 main.go:141] libmachine: (ha-900414-m02)     </interface>
	I0729 17:16:58.907450   29751 main.go:141] libmachine: (ha-900414-m02)     <serial type='pty'>
	I0729 17:16:58.907459   29751 main.go:141] libmachine: (ha-900414-m02)       <target port='0'/>
	I0729 17:16:58.907468   29751 main.go:141] libmachine: (ha-900414-m02)     </serial>
	I0729 17:16:58.907479   29751 main.go:141] libmachine: (ha-900414-m02)     <console type='pty'>
	I0729 17:16:58.907493   29751 main.go:141] libmachine: (ha-900414-m02)       <target type='serial' port='0'/>
	I0729 17:16:58.907505   29751 main.go:141] libmachine: (ha-900414-m02)     </console>
	I0729 17:16:58.907515   29751 main.go:141] libmachine: (ha-900414-m02)     <rng model='virtio'>
	I0729 17:16:58.907526   29751 main.go:141] libmachine: (ha-900414-m02)       <backend model='random'>/dev/random</backend>
	I0729 17:16:58.907536   29751 main.go:141] libmachine: (ha-900414-m02)     </rng>
	I0729 17:16:58.907542   29751 main.go:141] libmachine: (ha-900414-m02)     
	I0729 17:16:58.907548   29751 main.go:141] libmachine: (ha-900414-m02)     
	I0729 17:16:58.907558   29751 main.go:141] libmachine: (ha-900414-m02)   </devices>
	I0729 17:16:58.907567   29751 main.go:141] libmachine: (ha-900414-m02) </domain>
	I0729 17:16:58.907593   29751 main.go:141] libmachine: (ha-900414-m02) 
	I0729 17:16:58.913793   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:d1:4d:17 in network default
	I0729 17:16:58.914411   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:16:58.914427   29751 main.go:141] libmachine: (ha-900414-m02) Ensuring networks are active...
	I0729 17:16:58.915106   29751 main.go:141] libmachine: (ha-900414-m02) Ensuring network default is active
	I0729 17:16:58.915393   29751 main.go:141] libmachine: (ha-900414-m02) Ensuring network mk-ha-900414 is active
	I0729 17:16:58.915695   29751 main.go:141] libmachine: (ha-900414-m02) Getting domain xml...
	I0729 17:16:58.916367   29751 main.go:141] libmachine: (ha-900414-m02) Creating domain...
	I0729 17:17:00.619006   29751 main.go:141] libmachine: (ha-900414-m02) Waiting to get IP...
	I0729 17:17:00.619723   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:00.620118   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:00.620142   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:00.620098   30168 retry.go:31] will retry after 188.399655ms: waiting for machine to come up
	I0729 17:17:00.810510   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:00.811048   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:00.811072   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:00.810990   30168 retry.go:31] will retry after 292.630472ms: waiting for machine to come up
	I0729 17:17:01.105586   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:01.106002   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:01.106039   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:01.105964   30168 retry.go:31] will retry after 319.398962ms: waiting for machine to come up
	I0729 17:17:01.428994   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:01.429539   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:01.429566   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:01.429502   30168 retry.go:31] will retry after 464.509758ms: waiting for machine to come up
	I0729 17:17:01.895053   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:01.895562   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:01.895592   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:01.895517   30168 retry.go:31] will retry after 484.399614ms: waiting for machine to come up
	I0729 17:17:02.381074   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:02.381631   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:02.381686   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:02.381606   30168 retry.go:31] will retry after 860.971027ms: waiting for machine to come up
	I0729 17:17:03.243726   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:03.244282   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:03.244341   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:03.244265   30168 retry.go:31] will retry after 863.225264ms: waiting for machine to come up
	I0729 17:17:04.108705   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:04.109216   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:04.109244   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:04.109172   30168 retry.go:31] will retry after 1.020483871s: waiting for machine to come up
	I0729 17:17:05.131433   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:05.131910   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:05.131935   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:05.131848   30168 retry.go:31] will retry after 1.375261619s: waiting for machine to come up
	I0729 17:17:06.509382   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:06.509825   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:06.509852   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:06.509790   30168 retry.go:31] will retry after 2.25713359s: waiting for machine to come up
	I0729 17:17:08.768596   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:08.769231   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:08.769260   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:08.769187   30168 retry.go:31] will retry after 2.235550458s: waiting for machine to come up
	I0729 17:17:11.007553   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:11.008004   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:11.008021   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:11.007976   30168 retry.go:31] will retry after 2.417813916s: waiting for machine to come up
	I0729 17:17:13.427492   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:13.427953   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:13.427980   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:13.427908   30168 retry.go:31] will retry after 4.370715986s: waiting for machine to come up
	I0729 17:17:17.803728   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:17.804160   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:17.804188   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:17.804120   30168 retry.go:31] will retry after 3.853692825s: waiting for machine to come up
	I0729 17:17:21.659016   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:21.659460   29751 main.go:141] libmachine: (ha-900414-m02) Found IP for machine: 192.168.39.111
	I0729 17:17:21.659486   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has current primary IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:21.659495   29751 main.go:141] libmachine: (ha-900414-m02) Reserving static IP address...
	I0729 17:17:21.659832   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find host DHCP lease matching {name: "ha-900414-m02", mac: "52:54:00:a0:84:83", ip: "192.168.39.111"} in network mk-ha-900414
	I0729 17:17:21.731281   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Getting to WaitForSSH function...
	I0729 17:17:21.731311   29751 main.go:141] libmachine: (ha-900414-m02) Reserved static IP address: 192.168.39.111
	I0729 17:17:21.731324   29751 main.go:141] libmachine: (ha-900414-m02) Waiting for SSH to be available...
	I0729 17:17:21.733654   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:21.734150   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:21.734177   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:21.734279   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Using SSH client type: external
	I0729 17:17:21.734303   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/id_rsa (-rw-------)
	I0729 17:17:21.734329   29751 main.go:141] libmachine: (ha-900414-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 17:17:21.734342   29751 main.go:141] libmachine: (ha-900414-m02) DBG | About to run SSH command:
	I0729 17:17:21.734371   29751 main.go:141] libmachine: (ha-900414-m02) DBG | exit 0
	I0729 17:17:21.854563   29751 main.go:141] libmachine: (ha-900414-m02) DBG | SSH cmd err, output: <nil>: 
	I0729 17:17:21.854851   29751 main.go:141] libmachine: (ha-900414-m02) KVM machine creation complete!
	I0729 17:17:21.855119   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetConfigRaw
	I0729 17:17:21.855699   29751 main.go:141] libmachine: (ha-900414-m02) Calling .DriverName
	I0729 17:17:21.855898   29751 main.go:141] libmachine: (ha-900414-m02) Calling .DriverName
	I0729 17:17:21.856085   29751 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 17:17:21.856101   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetState
	I0729 17:17:21.857273   29751 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 17:17:21.857288   29751 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 17:17:21.857296   29751 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 17:17:21.857303   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:17:21.859622   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:21.860022   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:21.860050   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:21.860171   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:17:21.860343   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:21.860499   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:21.860656   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:17:21.860857   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:17:21.861112   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0729 17:17:21.861133   29751 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 17:17:21.957577   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:17:21.957598   29751 main.go:141] libmachine: Detecting the provisioner...
	I0729 17:17:21.957609   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:17:21.960289   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:21.960658   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:21.960683   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:21.960879   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:17:21.961042   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:21.961190   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:21.961335   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:17:21.961486   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:17:21.961640   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0729 17:17:21.961651   29751 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 17:17:22.059224   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 17:17:22.059299   29751 main.go:141] libmachine: found compatible host: buildroot
	I0729 17:17:22.059309   29751 main.go:141] libmachine: Provisioning with buildroot...
	I0729 17:17:22.059317   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetMachineName
	I0729 17:17:22.059537   29751 buildroot.go:166] provisioning hostname "ha-900414-m02"
	I0729 17:17:22.059562   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetMachineName
	I0729 17:17:22.059774   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:17:22.062185   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:22.062523   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:22.062550   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:22.062672   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:17:22.062834   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:22.062990   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:22.063094   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:17:22.063260   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:17:22.063416   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0729 17:17:22.063426   29751 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-900414-m02 && echo "ha-900414-m02" | sudo tee /etc/hostname
	I0729 17:17:22.180800   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-900414-m02
	
	I0729 17:17:22.180830   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:17:22.183377   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:22.183784   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:22.183811   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:22.183965   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:17:22.184142   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:22.184301   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:22.184440   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:17:22.184599   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:17:22.184750   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0729 17:17:22.184765   29751 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-900414-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-900414-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-900414-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 17:17:22.291289   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
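
	Provisioning runs entirely over SSH as the "docker" user with the per-machine private key: first the transient and persistent hostname are set, then the name is pinned in /etc/hosts. Below is a minimal sketch of running such a provisioning command over SSH with golang.org/x/crypto/ssh; it is illustrative only, not libmachine's client, and the host, port, user and key path are simply the values seen in the log above.

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path, user, host and port are taken from the log above.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
		}
		client, err := ssh.Dial("tcp", "192.168.39.111:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()

		// Same hostname command the provisioner runs above.
		out, err := sess.CombinedOutput(`sudo hostname ha-900414-m02 && echo "ha-900414-m02" | sudo tee /etc/hostname`)
		fmt.Printf("output: %s, err: %v\n", out, err)
	}
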
	I0729 17:17:22.291314   29751 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 17:17:22.291333   29751 buildroot.go:174] setting up certificates
	I0729 17:17:22.291344   29751 provision.go:84] configureAuth start
	I0729 17:17:22.291355   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetMachineName
	I0729 17:17:22.291638   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetIP
	I0729 17:17:22.294329   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:22.294679   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:22.294704   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:22.294918   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:17:22.297031   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:22.297352   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:22.297378   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:22.297495   29751 provision.go:143] copyHostCerts
	I0729 17:17:22.297526   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 17:17:22.297566   29751 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 17:17:22.297575   29751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 17:17:22.297645   29751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 17:17:22.297747   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 17:17:22.297772   29751 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 17:17:22.297785   29751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 17:17:22.297829   29751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 17:17:22.297899   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 17:17:22.297923   29751 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 17:17:22.297933   29751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 17:17:22.297974   29751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 17:17:22.298039   29751 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.ha-900414-m02 san=[127.0.0.1 192.168.39.111 ha-900414-m02 localhost minikube]
	I0729 17:17:22.640633   29751 provision.go:177] copyRemoteCerts
	I0729 17:17:22.640687   29751 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 17:17:22.640711   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:17:22.643109   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:22.643428   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:22.643467   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:22.643664   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:17:22.643876   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:22.644058   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:17:22.644207   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/id_rsa Username:docker}
	I0729 17:17:22.724646   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 17:17:22.724714   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 17:17:22.752409   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 17:17:22.752479   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 17:17:22.776188   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 17:17:22.776242   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 17:17:22.799494   29751 provision.go:87] duration metric: took 508.139423ms to configureAuth
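
	configureAuth copies the shared CA material onto the host and mints a per-machine server certificate whose SANs cover 127.0.0.1, the node IP 192.168.39.111, the hostname, localhost and minikube, so connections by either IP or name verify. A minimal sketch of issuing such a SAN certificate with crypto/x509 follows; it is illustrative only (a throwaway CA is generated in place of the real ca.pem/ca-key.pem, and error handling is elided).

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Stand-in CA generated on the fly; the real flow loads ca.pem/ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the SANs from the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-900414-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			DNSNames:     []string{"ha-900414-m02", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.111")},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
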
	I0729 17:17:22.799514   29751 buildroot.go:189] setting minikube options for container-runtime
	I0729 17:17:22.799685   29751 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:17:22.799762   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:17:22.802126   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:22.802649   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:22.802687   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:22.802906   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:17:22.803153   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:22.803310   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:22.803458   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:17:22.803590   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:17:22.803750   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0729 17:17:22.803769   29751 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 17:17:23.070079   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 17:17:23.070100   29751 main.go:141] libmachine: Checking connection to Docker...
	I0729 17:17:23.070108   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetURL
	I0729 17:17:23.071565   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Using libvirt version 6000000
	I0729 17:17:23.073565   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.073827   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:23.073851   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.074009   29751 main.go:141] libmachine: Docker is up and running!
	I0729 17:17:23.074026   29751 main.go:141] libmachine: Reticulating splines...
	I0729 17:17:23.074032   29751 client.go:171] duration metric: took 24.53169471s to LocalClient.Create
	I0729 17:17:23.074049   29751 start.go:167] duration metric: took 24.531741976s to libmachine.API.Create "ha-900414"
	I0729 17:17:23.074058   29751 start.go:293] postStartSetup for "ha-900414-m02" (driver="kvm2")
	I0729 17:17:23.074067   29751 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 17:17:23.074090   29751 main.go:141] libmachine: (ha-900414-m02) Calling .DriverName
	I0729 17:17:23.074354   29751 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 17:17:23.074415   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:17:23.076745   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.077166   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:23.077200   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.077379   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:17:23.077584   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:23.077742   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:17:23.077886   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/id_rsa Username:docker}
	I0729 17:17:23.156548   29751 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 17:17:23.160575   29751 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 17:17:23.160600   29751 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 17:17:23.160663   29751 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 17:17:23.160750   29751 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 17:17:23.160764   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> /etc/ssl/certs/183932.pem
	I0729 17:17:23.160882   29751 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 17:17:23.169949   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 17:17:23.192728   29751 start.go:296] duration metric: took 118.657263ms for postStartSetup
	I0729 17:17:23.192802   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetConfigRaw
	I0729 17:17:23.193367   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetIP
	I0729 17:17:23.195993   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.196313   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:23.196341   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.196551   29751 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/config.json ...
	I0729 17:17:23.196742   29751 start.go:128] duration metric: took 24.671779175s to createHost
	I0729 17:17:23.196763   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:17:23.199094   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.199406   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:23.199432   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.199623   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:17:23.199825   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:23.199972   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:23.200100   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:17:23.200263   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:17:23.200432   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0729 17:17:23.200447   29751 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 17:17:23.298817   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722273443.257470758
	
	I0729 17:17:23.298841   29751 fix.go:216] guest clock: 1722273443.257470758
	I0729 17:17:23.298849   29751 fix.go:229] Guest: 2024-07-29 17:17:23.257470758 +0000 UTC Remote: 2024-07-29 17:17:23.196753922 +0000 UTC m=+83.553013806 (delta=60.716836ms)
	I0729 17:17:23.298873   29751 fix.go:200] guest clock delta is within tolerance: 60.716836ms
	I0729 17:17:23.298878   29751 start.go:83] releasing machines lock for "ha-900414-m02", held for 24.773992971s
	I0729 17:17:23.298896   29751 main.go:141] libmachine: (ha-900414-m02) Calling .DriverName
	I0729 17:17:23.299203   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetIP
	I0729 17:17:23.301678   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.302011   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:23.302039   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.304375   29751 out.go:177] * Found network options:
	I0729 17:17:23.305631   29751 out.go:177]   - NO_PROXY=192.168.39.114
	W0729 17:17:23.306797   29751 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 17:17:23.306829   29751 main.go:141] libmachine: (ha-900414-m02) Calling .DriverName
	I0729 17:17:23.307291   29751 main.go:141] libmachine: (ha-900414-m02) Calling .DriverName
	I0729 17:17:23.307456   29751 main.go:141] libmachine: (ha-900414-m02) Calling .DriverName
	I0729 17:17:23.307517   29751 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 17:17:23.307561   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	W0729 17:17:23.307639   29751 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 17:17:23.307724   29751 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 17:17:23.307744   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:17:23.310211   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.310603   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:23.310631   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.310672   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.310757   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:17:23.310902   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:23.311070   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:17:23.311107   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:23.311139   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.311199   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/id_rsa Username:docker}
	I0729 17:17:23.311276   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:17:23.311431   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:23.311588   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:17:23.311721   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/id_rsa Username:docker}
	I0729 17:17:23.542543   29751 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 17:17:23.548943   29751 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 17:17:23.549006   29751 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 17:17:23.565708   29751 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 17:17:23.565731   29751 start.go:495] detecting cgroup driver to use...
	I0729 17:17:23.565799   29751 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 17:17:23.582146   29751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 17:17:23.595882   29751 docker.go:217] disabling cri-docker service (if available) ...
	I0729 17:17:23.595932   29751 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 17:17:23.609950   29751 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 17:17:23.623881   29751 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 17:17:23.740433   29751 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 17:17:23.890709   29751 docker.go:233] disabling docker service ...
	I0729 17:17:23.890793   29751 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 17:17:23.905201   29751 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 17:17:23.918576   29751 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 17:17:24.057759   29751 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 17:17:24.165940   29751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 17:17:24.180233   29751 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 17:17:24.198828   29751 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 17:17:24.198905   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:17:24.209360   29751 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 17:17:24.209411   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:17:24.219742   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:17:24.229772   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:17:24.239876   29751 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 17:17:24.250101   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:17:24.260133   29751 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:17:24.276593   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
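
	The sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged ports from 0 via default_sysctls. A rough in-process equivalent of just the first two rewrites is sketched below; it is illustrative only, not minikube's sed-over-ssh_runner path.

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.8"
	[crio.runtime]
	cgroup_manager = "systemd"
	`
		// Same line-oriented replacements the sed commands perform above.
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		fmt.Print(conf)
	}
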
	I0729 17:17:24.286837   29751 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 17:17:24.295829   29751 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 17:17:24.295882   29751 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 17:17:24.308286   29751 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
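
	The sysctl probe fails because the br_netfilter module is not loaded on a fresh Buildroot guest, so the fallback is to modprobe it and then enable IPv4 forwarding, both prerequisites for pod networking. A minimal sketch of that fallback (illustrative only; must run as root on the guest):

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
			// Key missing: the bridge netfilter module has not been loaded yet.
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
			}
		}
		// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
			log.Fatal(err)
		}
	}
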
	I0729 17:17:24.317390   29751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:17:24.438381   29751 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 17:17:24.575355   29751 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 17:17:24.575427   29751 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 17:17:24.580382   29751 start.go:563] Will wait 60s for crictl version
	I0729 17:17:24.580435   29751 ssh_runner.go:195] Run: which crictl
	I0729 17:17:24.584163   29751 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 17:17:24.623041   29751 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 17:17:24.623126   29751 ssh_runner.go:195] Run: crio --version
	I0729 17:17:24.651578   29751 ssh_runner.go:195] Run: crio --version
	I0729 17:17:24.679198   29751 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 17:17:24.680823   29751 out.go:177]   - env NO_PROXY=192.168.39.114
	I0729 17:17:24.681949   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetIP
	I0729 17:17:24.684319   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:24.684652   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:24.684678   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:24.684857   29751 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 17:17:24.689245   29751 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 17:17:24.702628   29751 mustload.go:65] Loading cluster: ha-900414
	I0729 17:17:24.702858   29751 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:17:24.703235   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:17:24.703267   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:17:24.718581   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40361
	I0729 17:17:24.719166   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:17:24.719615   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:17:24.719632   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:17:24.719974   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:17:24.720165   29751 main.go:141] libmachine: (ha-900414) Calling .GetState
	I0729 17:17:24.721752   29751 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:17:24.722123   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:17:24.722164   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:17:24.736191   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46521
	I0729 17:17:24.736563   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:17:24.736978   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:17:24.737000   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:17:24.737303   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:17:24.737480   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:17:24.737637   29751 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414 for IP: 192.168.39.111
	I0729 17:17:24.737652   29751 certs.go:194] generating shared ca certs ...
	I0729 17:17:24.737673   29751 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:17:24.737822   29751 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 17:17:24.737875   29751 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 17:17:24.737886   29751 certs.go:256] generating profile certs ...
	I0729 17:17:24.737954   29751 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.key
	I0729 17:17:24.737981   29751 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.6155267f
	I0729 17:17:24.737997   29751 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.6155267f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114 192.168.39.111 192.168.39.254]
	I0729 17:17:24.872649   29751 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.6155267f ...
	I0729 17:17:24.872681   29751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.6155267f: {Name:mkd7e35496498bf0055f677e97a30422901015d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:17:24.872892   29751 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.6155267f ...
	I0729 17:17:24.872910   29751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.6155267f: {Name:mk12b1d3199513cca10afd617c4d659c36c472c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:17:24.873035   29751 certs.go:381] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.6155267f -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt
	I0729 17:17:24.873187   29751 certs.go:385] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.6155267f -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key
	I0729 17:17:24.873322   29751 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key
	I0729 17:17:24.873336   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 17:17:24.873350   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 17:17:24.873365   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 17:17:24.873381   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 17:17:24.873396   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 17:17:24.873411   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 17:17:24.873425   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 17:17:24.873436   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 17:17:24.873492   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 17:17:24.873523   29751 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 17:17:24.873533   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 17:17:24.873564   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 17:17:24.873591   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 17:17:24.873617   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 17:17:24.873659   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 17:17:24.873721   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:17:24.873743   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem -> /usr/share/ca-certificates/18393.pem
	I0729 17:17:24.873763   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> /usr/share/ca-certificates/183932.pem
	I0729 17:17:24.873808   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:17:24.877514   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:17:24.878013   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:17:24.878037   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:17:24.878240   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:17:24.878469   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:17:24.878624   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:17:24.878788   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:17:24.958760   29751 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0729 17:17:24.963511   29751 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 17:17:24.977560   29751 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0729 17:17:24.983710   29751 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0729 17:17:24.993985   29751 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 17:17:24.998592   29751 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 17:17:25.008591   29751 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0729 17:17:25.012811   29751 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0729 17:17:25.022725   29751 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0729 17:17:25.026862   29751 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 17:17:25.037182   29751 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0729 17:17:25.041425   29751 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0729 17:17:25.052018   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 17:17:25.078408   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 17:17:25.103576   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 17:17:25.128493   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 17:17:25.152760   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 17:17:25.176177   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 17:17:25.199440   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 17:17:25.222882   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 17:17:25.247585   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 17:17:25.271558   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 17:17:25.294940   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 17:17:25.318779   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 17:17:25.335350   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0729 17:17:25.351545   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 17:17:25.367375   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0729 17:17:25.383568   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 17:17:25.401037   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0729 17:17:25.418529   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 17:17:25.436375   29751 ssh_runner.go:195] Run: openssl version
	I0729 17:17:25.442117   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 17:17:25.452592   29751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:17:25.457023   29751 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:17:25.457071   29751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:17:25.463311   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 17:17:25.473973   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 17:17:25.484597   29751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 17:17:25.488960   29751 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 17:17:25.489009   29751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 17:17:25.494838   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 17:17:25.508665   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 17:17:25.519756   29751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 17:17:25.524362   29751 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 17:17:25.524400   29751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 17:17:25.529987   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
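
	Each trusted CA bundle copied to /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0 above), which is how OpenSSL-based clients on the guest discover it. A minimal sketch of creating one such hash link (illustrative only; shells out to the guest's openssl binary and needs root):

	package main

	import (
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
		// openssl x509 -hash -noout -in <cert> prints the subject hash used as the link name.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			log.Fatal(err)
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
			log.Fatal(err)
		}
		log.Printf("linked %s -> %s", link, pemPath)
	}
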
	I0729 17:17:25.540158   29751 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 17:17:25.544233   29751 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 17:17:25.544291   29751 kubeadm.go:934] updating node {m02 192.168.39.111 8443 v1.30.3 crio true true} ...
	I0729 17:17:25.544369   29751 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-900414-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-900414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 17:17:25.544392   29751 kube-vip.go:115] generating kube-vip config ...
	I0729 17:17:25.544425   29751 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 17:17:25.561651   29751 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 17:17:25.561720   29751 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
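
	The config above is rendered into a static-pod manifest and written to /etc/kubernetes/manifests/kube-vip.yaml (the 1441-byte scp further down), so the kubelet on each control-plane node runs kube-vip, which leader-elects and holds the HA VIP 192.168.39.254 on eth0 while load-balancing port 8443 across API servers. A minimal rendering sketch with text/template follows; the template body and field names are illustrative, not minikube's actual template.

	package main

	import (
		"log"
		"os"
		"text/template"
	)

	const manifest = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - name: kube-vip
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    env:
	    - name: address
	      value: "{{ .VIP }}"
	    - name: port
	      value: "{{ .Port }}"
	    - name: vip_interface
	      value: {{ .Interface }}
	  hostNetwork: true
	`

	func main() {
		tmpl := template.Must(template.New("kube-vip").Parse(manifest))
		f, err := os.Create("/etc/kubernetes/manifests/kube-vip.yaml")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		params := struct{ VIP, Port, Interface string }{"192.168.39.254", "8443", "eth0"}
		if err := tmpl.Execute(f, params); err != nil {
			log.Fatal(err)
		}
	}
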
	I0729 17:17:25.561780   29751 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 17:17:25.573335   29751 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 17:17:25.573403   29751 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 17:17:25.584187   29751 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 17:17:25.584215   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 17:17:25.584277   29751 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0729 17:17:25.584298   29751 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 17:17:25.584317   29751 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0729 17:17:25.589102   29751 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 17:17:25.589127   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 17:17:26.426450   29751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:17:26.440486   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 17:17:26.440581   29751 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 17:17:26.444763   29751 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 17:17:26.444796   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0729 17:17:28.867115   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 17:17:28.867192   29751 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 17:17:28.872102   29751 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 17:17:28.872129   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
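
	Each Kubernetes binary (kubectl, kubelet, kubeadm) is fetched from dl.k8s.io together with its published .sha256 checksum, cached under .minikube/cache, and copied into /var/lib/minikube/binaries/v1.30.3 on the node. A minimal checksum-verified download sketch (illustrative only, not minikube's download package):

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"net/http"
		"os"
		"strings"
	)

	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		base := "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl"
		bin, err := fetch(base)
		if err != nil {
			log.Fatal(err)
		}
		sum, err := fetch(base + ".sha256")
		if err != nil {
			log.Fatal(err)
		}
		// The .sha256 file contains only the hex digest of the binary.
		got := sha256.Sum256(bin)
		if hex.EncodeToString(got[:]) != strings.TrimSpace(string(sum)) {
			log.Fatal("checksum mismatch")
		}
		if err := os.WriteFile("kubectl", bin, 0755); err != nil {
			log.Fatal(err)
		}
	}
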
	I0729 17:17:29.084956   29751 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 17:17:29.094366   29751 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0729 17:17:29.110915   29751 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 17:17:29.126950   29751 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 17:17:29.143874   29751 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 17:17:29.148322   29751 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 17:17:29.161396   29751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:17:29.288462   29751 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:17:29.306423   29751 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:17:29.306884   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:17:29.306935   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:17:29.321781   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33867
	I0729 17:17:29.322256   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:17:29.322797   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:17:29.322822   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:17:29.323144   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:17:29.323324   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:17:29.323436   29751 start.go:317] joinCluster: &{Name:ha-900414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-900414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:17:29.323528   29751 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 17:17:29.323548   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:17:29.326494   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:17:29.326884   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:17:29.326913   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:17:29.327089   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:17:29.327357   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:17:29.327543   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:17:29.327687   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:17:29.484457   29751 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:17:29.484526   29751 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dg0ylq.mtxxgcl7pxnl45i3 --discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-900414-m02 --control-plane --apiserver-advertise-address=192.168.39.111 --apiserver-bind-port=8443"
	I0729 17:17:54.621435   29751 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dg0ylq.mtxxgcl7pxnl45i3 --discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-900414-m02 --control-plane --apiserver-advertise-address=192.168.39.111 --apiserver-bind-port=8443": (25.136878808s)
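
For reference, the control-plane join recorded above is a two-step flow: the primary node prints a reusable join command (`kubeadm token create --print-join-command --ttl=0`), and the new node runs it with the extra control-plane flags visible in the log. The Go sketch below mirrors those exact flags; running the commands locally via os/exec is an illustrative assumption — minikube drives them over SSH through its ssh_runner.

// Sketch only: two-step kubeadm control-plane join, flags taken from the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1 (on the existing control plane): print a reusable join command.
	out, err := exec.Command("kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").CombinedOutput()
	if err != nil {
		panic(fmt.Errorf("token create: %v: %s", err, out))
	}
	joinCmd := strings.TrimSpace(string(out))

	// Step 2 (on the joining node): append the control-plane flags seen in the log.
	full := joinCmd +
		" --control-plane" +
		" --cri-socket unix:///var/run/crio/crio.sock" +
		" --apiserver-advertise-address=192.168.39.111" +
		" --apiserver-bind-port=8443" +
		" --node-name=ha-900414-m02" +
		" --ignore-preflight-errors=all"
	if out, err := exec.Command("bash", "-c", "sudo "+full).CombinedOutput(); err != nil {
		panic(fmt.Errorf("kubeadm join: %v: %s", err, out))
	}
	fmt.Println("control-plane node joined")
}

After the join, the log shows kubelet being enabled and started, then the node being labeled and untainted, which the next sketch covers.
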
	I0729 17:17:54.621488   29751 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 17:17:55.218635   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-900414-m02 minikube.k8s.io/updated_at=2024_07_29T17_17_55_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899 minikube.k8s.io/name=ha-900414 minikube.k8s.io/primary=false
	I0729 17:17:55.355904   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-900414-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 17:17:55.491251   29751 start.go:319] duration metric: took 26.167808458s to joinCluster
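
The two kubectl invocations above (label the new node, then drop its control-plane NoSchedule taint) can also be done directly against the API. This is a minimal client-go sketch, not minikube's helper; the node name, label key, and kubeconfig path come from the log, everything else is assumed for illustration.

// Sketch: label ha-900414-m02 and remove the control-plane NoSchedule taint via client-go.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	node, err := cs.CoreV1().Nodes().Get(ctx, "ha-900414-m02", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Equivalent of `kubectl label --overwrite nodes ... minikube.k8s.io/primary=false`.
	if node.Labels == nil {
		node.Labels = map[string]string{}
	}
	node.Labels["minikube.k8s.io/primary"] = "false"

	// Equivalent of `kubectl taint nodes ... node-role.kubernetes.io/control-plane:NoSchedule-`.
	var kept []corev1.Taint
	for _, t := range node.Spec.Taints {
		if t.Key == "node-role.kubernetes.io/control-plane" && t.Effect == corev1.TaintEffectNoSchedule {
			continue
		}
		kept = append(kept, t)
	}
	node.Spec.Taints = kept

	if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
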
	I0729 17:17:55.491328   29751 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:17:55.491643   29751 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:17:55.493003   29751 out.go:177] * Verifying Kubernetes components...
	I0729 17:17:55.494406   29751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:17:55.854120   29751 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:17:55.886164   29751 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 17:17:55.886408   29751 kapi.go:59] client config for ha-900414: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.crt", KeyFile:"/home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.key", CAFile:"/home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 17:17:55.886474   29751 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.114:8443
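
The "Overriding stale ClientConfig host" warning above corresponds to loading the profile's kubeconfig (which still names the HA VIP 192.168.39.254) and repointing the client at a directly reachable control plane before the readiness checks start. A minimal client-go sketch, assuming the kubeconfig path and addresses from the log; minikube builds its own rest.Config rather than using this exact code.

// Sketch: load kubeconfig, override the stale host, and verify the client works.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19345-11206/kubeconfig")
	if err != nil {
		panic(err)
	}
	// The kubeconfig points at the HA VIP; override it with the primary's address,
	// as the warning in the log describes.
	cfg.Host = "https://192.168.39.114:8443"

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("reached %s, cluster has %d node(s)\n", cfg.Host, len(nodes.Items))
}
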
	I0729 17:17:55.886694   29751 node_ready.go:35] waiting up to 6m0s for node "ha-900414-m02" to be "Ready" ...
	I0729 17:17:55.886787   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:17:55.886794   29751 round_trippers.go:469] Request Headers:
	I0729 17:17:55.886801   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:17:55.886804   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:17:55.908576   29751 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0729 17:17:56.387630   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:17:56.387659   29751 round_trippers.go:469] Request Headers:
	I0729 17:17:56.387671   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:17:56.387680   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:17:56.393305   29751 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 17:17:56.887572   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:17:56.887591   29751 round_trippers.go:469] Request Headers:
	I0729 17:17:56.887599   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:17:56.887605   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:17:56.891861   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:17:57.387556   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:17:57.387575   29751 round_trippers.go:469] Request Headers:
	I0729 17:17:57.387584   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:17:57.387588   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:17:57.390769   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:17:57.886976   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:17:57.887008   29751 round_trippers.go:469] Request Headers:
	I0729 17:17:57.887016   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:17:57.887022   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:17:57.890215   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:17:57.890674   29751 node_ready.go:53] node "ha-900414-m02" has status "Ready":"False"
	I0729 17:17:58.387248   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:17:58.387271   29751 round_trippers.go:469] Request Headers:
	I0729 17:17:58.387281   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:17:58.387286   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:17:58.390520   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:17:58.887094   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:17:58.887124   29751 round_trippers.go:469] Request Headers:
	I0729 17:17:58.887135   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:17:58.887141   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:17:58.890309   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:17:59.387577   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:17:59.387596   29751 round_trippers.go:469] Request Headers:
	I0729 17:17:59.387604   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:17:59.387610   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:17:59.390735   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:17:59.887548   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:17:59.887569   29751 round_trippers.go:469] Request Headers:
	I0729 17:17:59.887577   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:17:59.887581   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:17:59.890851   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:17:59.891615   29751 node_ready.go:53] node "ha-900414-m02" has status "Ready":"False"
	I0729 17:18:00.386953   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:00.386977   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:00.386985   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:00.386988   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:00.390166   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:00.887111   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:00.887130   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:00.887144   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:00.887149   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:00.890789   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:01.387760   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:01.387789   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:01.387801   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:01.387809   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:01.390717   29751 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:18:01.887627   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:01.887650   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:01.887662   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:01.887666   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:01.891414   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:01.892166   29751 node_ready.go:53] node "ha-900414-m02" has status "Ready":"False"
	I0729 17:18:02.387147   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:02.387171   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:02.387181   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:02.387187   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:02.391744   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:18:02.887130   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:02.887150   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:02.887158   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:02.887165   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:02.891177   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:03.387218   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:03.387242   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:03.387255   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:03.387261   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:03.391546   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:18:03.887097   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:03.887119   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:03.887128   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:03.887131   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:03.891034   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:04.386829   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:04.386868   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:04.386877   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:04.386881   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:04.390484   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:04.391532   29751 node_ready.go:53] node "ha-900414-m02" has status "Ready":"False"
	I0729 17:18:04.887415   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:04.887437   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:04.887445   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:04.887449   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:04.891190   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:05.387227   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:05.387246   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:05.387254   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:05.387258   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:05.392017   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:18:05.887597   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:05.887620   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:05.887627   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:05.887630   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:05.890989   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:06.386930   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:06.386953   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:06.386961   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:06.386964   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:06.390026   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:06.887322   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:06.887344   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:06.887352   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:06.887355   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:06.890903   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:06.891449   29751 node_ready.go:53] node "ha-900414-m02" has status "Ready":"False"
	I0729 17:18:07.387915   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:07.387941   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:07.387954   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:07.387962   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:07.392125   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:18:07.886969   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:07.886991   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:07.886999   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:07.887006   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:07.890141   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:08.387240   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:08.387261   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:08.387269   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:08.387275   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:08.389917   29751 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:18:08.887289   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:08.887313   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:08.887321   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:08.887324   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:08.890473   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:09.387176   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:09.387197   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:09.387209   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:09.387215   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:09.390670   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:09.391446   29751 node_ready.go:53] node "ha-900414-m02" has status "Ready":"False"
	I0729 17:18:09.887268   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:09.887288   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:09.887296   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:09.887301   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:09.890603   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:09.891231   29751 node_ready.go:49] node "ha-900414-m02" has status "Ready":"True"
	I0729 17:18:09.891246   29751 node_ready.go:38] duration metric: took 14.004538508s for node "ha-900414-m02" to be "Ready" ...
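
The long run of GET requests above is a readiness poll: fetch the node roughly every half second until its Ready condition reports True, bounded by the 6m0s budget. The helper below is a client-go sketch of that loop (the function name and error handling are mine, not minikube's); the 500ms interval and timeout mirror the log.

// Sketch: poll a node's Ready condition until it is True or the timeout expires.
package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					fmt.Printf("node %q has status \"Ready\":%q\n", name, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

A caller would invoke waitNodeReady(cs, "ha-900414-m02", 6*time.Minute), which is the wait the log reports completing after about 14 seconds.
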
	I0729 17:18:09.891257   29751 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 17:18:09.891316   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I0729 17:18:09.891327   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:09.891337   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:09.891344   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:09.897157   29751 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 17:18:09.903216   29751 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-48j6w" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:09.903304   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-48j6w
	I0729 17:18:09.903315   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:09.903325   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:09.903334   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:09.906411   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:09.907082   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:18:09.907101   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:09.907111   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:09.907116   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:09.910206   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:09.910796   29751 pod_ready.go:92] pod "coredns-7db6d8ff4d-48j6w" in "kube-system" namespace has status "Ready":"True"
	I0729 17:18:09.910817   29751 pod_ready.go:81] duration metric: took 7.577654ms for pod "coredns-7db6d8ff4d-48j6w" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:09.910829   29751 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9r87x" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:09.910889   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9r87x
	I0729 17:18:09.910897   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:09.910905   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:09.910910   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:09.915990   29751 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 17:18:09.916658   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:18:09.916675   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:09.916682   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:09.916688   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:09.919834   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:09.920446   29751 pod_ready.go:92] pod "coredns-7db6d8ff4d-9r87x" in "kube-system" namespace has status "Ready":"True"
	I0729 17:18:09.920463   29751 pod_ready.go:81] duration metric: took 9.626107ms for pod "coredns-7db6d8ff4d-9r87x" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:09.920473   29751 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:09.920525   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-900414
	I0729 17:18:09.920535   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:09.920545   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:09.920553   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:09.925501   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:18:09.926713   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:18:09.926725   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:09.926735   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:09.926740   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:09.932445   29751 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 17:18:09.933590   29751 pod_ready.go:92] pod "etcd-ha-900414" in "kube-system" namespace has status "Ready":"True"
	I0729 17:18:09.933607   29751 pod_ready.go:81] duration metric: took 13.127022ms for pod "etcd-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:09.933618   29751 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:09.933669   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-900414-m02
	I0729 17:18:09.933681   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:09.933690   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:09.933698   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:09.937108   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:09.937740   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:09.937754   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:09.937763   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:09.937769   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:09.940278   29751 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:18:09.940802   29751 pod_ready.go:92] pod "etcd-ha-900414-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:18:09.940814   29751 pod_ready.go:81] duration metric: took 7.189004ms for pod "etcd-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:09.940831   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:10.088170   29751 request.go:629] Waited for 147.28026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-900414
	I0729 17:18:10.088233   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-900414
	I0729 17:18:10.088241   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:10.088252   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:10.088260   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:10.091479   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:10.287527   29751 request.go:629] Waited for 195.283397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:18:10.287593   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:18:10.287599   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:10.287607   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:10.287611   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:10.290769   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:10.291502   29751 pod_ready.go:92] pod "kube-apiserver-ha-900414" in "kube-system" namespace has status "Ready":"True"
	I0729 17:18:10.291522   29751 pod_ready.go:81] duration metric: took 350.680754ms for pod "kube-apiserver-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:10.291535   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:10.487521   29751 request.go:629] Waited for 195.924793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-900414-m02
	I0729 17:18:10.487588   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-900414-m02
	I0729 17:18:10.487615   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:10.487622   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:10.487627   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:10.492111   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:18:10.688171   29751 request.go:629] Waited for 195.330567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:10.688243   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:10.688250   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:10.688260   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:10.688268   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:10.691797   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:10.692474   29751 pod_ready.go:92] pod "kube-apiserver-ha-900414-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:18:10.692492   29751 pod_ready.go:81] duration metric: took 400.948997ms for pod "kube-apiserver-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:10.692507   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:10.887595   29751 request.go:629] Waited for 195.024359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-900414
	I0729 17:18:10.887652   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-900414
	I0729 17:18:10.887657   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:10.887665   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:10.887669   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:10.891054   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:11.088138   29751 request.go:629] Waited for 196.403846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:18:11.088238   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:18:11.088249   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:11.088265   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:11.088276   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:11.091389   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:11.091992   29751 pod_ready.go:92] pod "kube-controller-manager-ha-900414" in "kube-system" namespace has status "Ready":"True"
	I0729 17:18:11.092006   29751 pod_ready.go:81] duration metric: took 399.489771ms for pod "kube-controller-manager-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:11.092015   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:11.288104   29751 request.go:629] Waited for 196.035602ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-900414-m02
	I0729 17:18:11.288172   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-900414-m02
	I0729 17:18:11.288179   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:11.288189   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:11.288200   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:11.293624   29751 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 17:18:11.487678   29751 request.go:629] Waited for 193.334234ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:11.487740   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:11.487745   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:11.487753   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:11.487758   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:11.491175   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:11.491594   29751 pod_ready.go:92] pod "kube-controller-manager-ha-900414-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:18:11.491612   29751 pod_ready.go:81] duration metric: took 399.590285ms for pod "kube-controller-manager-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:11.491624   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bgq99" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:11.687816   29751 request.go:629] Waited for 196.119916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bgq99
	I0729 17:18:11.687890   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bgq99
	I0729 17:18:11.687896   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:11.687904   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:11.687907   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:11.691417   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:11.888300   29751 request.go:629] Waited for 196.368766ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:11.888408   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:11.888420   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:11.888434   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:11.888446   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:11.891967   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:11.892484   29751 pod_ready.go:92] pod "kube-proxy-bgq99" in "kube-system" namespace has status "Ready":"True"
	I0729 17:18:11.892501   29751 pod_ready.go:81] duration metric: took 400.869993ms for pod "kube-proxy-bgq99" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:11.892510   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tng4t" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:12.087671   29751 request.go:629] Waited for 195.094842ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tng4t
	I0729 17:18:12.087728   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tng4t
	I0729 17:18:12.087734   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:12.087741   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:12.087745   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:12.091271   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:12.287380   29751 request.go:629] Waited for 195.269885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:18:12.287443   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:18:12.287455   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:12.287471   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:12.287481   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:12.291276   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:12.291896   29751 pod_ready.go:92] pod "kube-proxy-tng4t" in "kube-system" namespace has status "Ready":"True"
	I0729 17:18:12.291920   29751 pod_ready.go:81] duration metric: took 399.402647ms for pod "kube-proxy-tng4t" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:12.291929   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:12.488009   29751 request.go:629] Waited for 196.00899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-900414
	I0729 17:18:12.488062   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-900414
	I0729 17:18:12.488067   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:12.488075   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:12.488078   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:12.491312   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:12.688180   29751 request.go:629] Waited for 196.383034ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:18:12.688232   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:18:12.688237   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:12.688245   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:12.688248   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:12.696022   29751 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 17:18:12.696490   29751 pod_ready.go:92] pod "kube-scheduler-ha-900414" in "kube-system" namespace has status "Ready":"True"
	I0729 17:18:12.696507   29751 pod_ready.go:81] duration metric: took 404.57204ms for pod "kube-scheduler-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:12.696516   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:12.887592   29751 request.go:629] Waited for 190.996178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-900414-m02
	I0729 17:18:12.887648   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-900414-m02
	I0729 17:18:12.887654   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:12.887663   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:12.887668   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:12.890813   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:13.087839   29751 request.go:629] Waited for 196.380669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:13.087913   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:13.087925   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:13.087934   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:13.087942   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:13.091231   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:13.092267   29751 pod_ready.go:92] pod "kube-scheduler-ha-900414-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:18:13.092283   29751 pod_ready.go:81] duration metric: took 395.761219ms for pod "kube-scheduler-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:13.092294   29751 pod_ready.go:38] duration metric: took 3.201024864s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
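
The pod_ready block above gates on every system-critical pod (the six component labels listed in the log) having a True Ready condition. Below is a client-go sketch of that check; the selector strings are taken from the log, while the function itself is an illustrative stand-in for minikube's pod_ready helpers.

// Sketch: verify all system-critical kube-system pods report Ready.
package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func systemPodsReady(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			return false, err
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
					break
				}
			}
			if !ready {
				return false, nil // at least one critical pod is not Ready yet
			}
		}
	}
	return true, nil
}
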
	I0729 17:18:13.092318   29751 api_server.go:52] waiting for apiserver process to appear ...
	I0729 17:18:13.092371   29751 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:18:13.109831   29751 api_server.go:72] duration metric: took 17.618467467s to wait for apiserver process to appear ...
	I0729 17:18:13.109873   29751 api_server.go:88] waiting for apiserver healthz status ...
	I0729 17:18:13.109904   29751 api_server.go:253] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I0729 17:18:13.113869   29751 api_server.go:279] https://192.168.39.114:8443/healthz returned 200:
	ok
	I0729 17:18:13.113926   29751 round_trippers.go:463] GET https://192.168.39.114:8443/version
	I0729 17:18:13.113935   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:13.113944   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:13.113954   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:13.114730   29751 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 17:18:13.114838   29751 api_server.go:141] control plane version: v1.30.3
	I0729 17:18:13.114859   29751 api_server.go:131] duration metric: took 4.976083ms to wait for apiserver health ...
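
The health check above is two authenticated requests: GET /healthz (expecting the literal body "ok") followed by GET /version, which is where the "control plane version: v1.30.3" line comes from. A minimal client-go sketch of the same probe, assuming the kubeconfig path and host from the log; minikube assembles its own HTTP client for this.

// Sketch: probe apiserver /healthz and read the server version.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19345-11206/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.Host = "https://192.168.39.114:8443"
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GET /healthz — a healthy apiserver returns 200 with body "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version — reports the control plane version.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}
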
	I0729 17:18:13.114868   29751 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 17:18:13.288310   29751 request.go:629] Waited for 173.35802ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I0729 17:18:13.288380   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I0729 17:18:13.288393   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:13.288407   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:13.288419   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:13.294660   29751 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 17:18:13.299447   29751 system_pods.go:59] 17 kube-system pods found
	I0729 17:18:13.299478   29751 system_pods.go:61] "coredns-7db6d8ff4d-48j6w" [306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d] Running
	I0729 17:18:13.299485   29751 system_pods.go:61] "coredns-7db6d8ff4d-9r87x" [fcc4709f-f07b-4694-a352-aedd9c67bbb2] Running
	I0729 17:18:13.299493   29751 system_pods.go:61] "etcd-ha-900414" [96243a16-1b51-4136-bc25-f3a0da2f7500] Running
	I0729 17:18:13.299503   29751 system_pods.go:61] "etcd-ha-900414-m02" [29c61208-cebd-4d6b-addf-426efcc78899] Running
	I0729 17:18:13.299508   29751 system_pods.go:61] "kindnet-kdzhk" [d86b52ee-7d4c-4530-afa1-88cf8ad77379] Running
	I0729 17:18:13.299513   29751 system_pods.go:61] "kindnet-z9cvz" [c2177daa-4efb-478c-845f-f30e77e91684] Running
	I0729 17:18:13.299519   29751 system_pods.go:61] "kube-apiserver-ha-900414" [2a4045e8-a900-4ebd-b36e-95083ab251c9] Running
	I0729 17:18:13.299523   29751 system_pods.go:61] "kube-apiserver-ha-900414-m02" [28c2e5cf-876b-4b77-b9c7-406642dc4df6] Running
	I0729 17:18:13.299527   29751 system_pods.go:61] "kube-controller-manager-ha-900414" [62bb9ded-db08-49a0-aea4-8806d0e8d294] Running
	I0729 17:18:13.299530   29751 system_pods.go:61] "kube-controller-manager-ha-900414-m02" [88418c96-4611-4276-91c6-ae9b67d4ae74] Running
	I0729 17:18:13.299533   29751 system_pods.go:61] "kube-proxy-bgq99" [0258cc44-f6ff-4294-a621-61b172247e15] Running
	I0729 17:18:13.299536   29751 system_pods.go:61] "kube-proxy-tng4t" [2303269f-50d3-4a63-aa76-891f001e6f5d] Running
	I0729 17:18:13.299539   29751 system_pods.go:61] "kube-scheduler-ha-900414" [3d41b818-c8ad-4dbb-bc7b-73f578d33539] Running
	I0729 17:18:13.299542   29751 system_pods.go:61] "kube-scheduler-ha-900414-m02" [f9cc318d-be18-4858-9712-b92f11027b65] Running
	I0729 17:18:13.299545   29751 system_pods.go:61] "kube-vip-ha-900414" [bf3918b4-6cc5-499b-808e-b6c33138cae2] Running
	I0729 17:18:13.299548   29751 system_pods.go:61] "kube-vip-ha-900414-m02" [9fad8ffb-6d3c-44ba-9700-e0e4d70a5f71] Running
	I0729 17:18:13.299551   29751 system_pods.go:61] "storage-provisioner" [50fa96e8-1ee5-4e09-a734-802dbcd02bcc] Running
	I0729 17:18:13.299557   29751 system_pods.go:74] duration metric: took 184.679927ms to wait for pod list to return data ...
	I0729 17:18:13.299567   29751 default_sa.go:34] waiting for default service account to be created ...
	I0729 17:18:13.488051   29751 request.go:629] Waited for 188.41705ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/default/serviceaccounts
	I0729 17:18:13.488136   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/default/serviceaccounts
	I0729 17:18:13.488150   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:13.488159   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:13.488167   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:13.491332   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:13.491567   29751 default_sa.go:45] found service account: "default"
	I0729 17:18:13.491583   29751 default_sa.go:55] duration metric: took 192.010397ms for default service account to be created ...
	I0729 17:18:13.491592   29751 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 17:18:13.688073   29751 request.go:629] Waited for 196.406178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I0729 17:18:13.688138   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I0729 17:18:13.688145   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:13.688155   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:13.688160   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:13.693426   29751 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 17:18:13.698826   29751 system_pods.go:86] 17 kube-system pods found
	I0729 17:18:13.698849   29751 system_pods.go:89] "coredns-7db6d8ff4d-48j6w" [306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d] Running
	I0729 17:18:13.698855   29751 system_pods.go:89] "coredns-7db6d8ff4d-9r87x" [fcc4709f-f07b-4694-a352-aedd9c67bbb2] Running
	I0729 17:18:13.698859   29751 system_pods.go:89] "etcd-ha-900414" [96243a16-1b51-4136-bc25-f3a0da2f7500] Running
	I0729 17:18:13.698864   29751 system_pods.go:89] "etcd-ha-900414-m02" [29c61208-cebd-4d6b-addf-426efcc78899] Running
	I0729 17:18:13.698868   29751 system_pods.go:89] "kindnet-kdzhk" [d86b52ee-7d4c-4530-afa1-88cf8ad77379] Running
	I0729 17:18:13.698873   29751 system_pods.go:89] "kindnet-z9cvz" [c2177daa-4efb-478c-845f-f30e77e91684] Running
	I0729 17:18:13.698877   29751 system_pods.go:89] "kube-apiserver-ha-900414" [2a4045e8-a900-4ebd-b36e-95083ab251c9] Running
	I0729 17:18:13.698881   29751 system_pods.go:89] "kube-apiserver-ha-900414-m02" [28c2e5cf-876b-4b77-b9c7-406642dc4df6] Running
	I0729 17:18:13.698886   29751 system_pods.go:89] "kube-controller-manager-ha-900414" [62bb9ded-db08-49a0-aea4-8806d0e8d294] Running
	I0729 17:18:13.698891   29751 system_pods.go:89] "kube-controller-manager-ha-900414-m02" [88418c96-4611-4276-91c6-ae9b67d4ae74] Running
	I0729 17:18:13.698897   29751 system_pods.go:89] "kube-proxy-bgq99" [0258cc44-f6ff-4294-a621-61b172247e15] Running
	I0729 17:18:13.698902   29751 system_pods.go:89] "kube-proxy-tng4t" [2303269f-50d3-4a63-aa76-891f001e6f5d] Running
	I0729 17:18:13.698905   29751 system_pods.go:89] "kube-scheduler-ha-900414" [3d41b818-c8ad-4dbb-bc7b-73f578d33539] Running
	I0729 17:18:13.698909   29751 system_pods.go:89] "kube-scheduler-ha-900414-m02" [f9cc318d-be18-4858-9712-b92f11027b65] Running
	I0729 17:18:13.698913   29751 system_pods.go:89] "kube-vip-ha-900414" [bf3918b4-6cc5-499b-808e-b6c33138cae2] Running
	I0729 17:18:13.698917   29751 system_pods.go:89] "kube-vip-ha-900414-m02" [9fad8ffb-6d3c-44ba-9700-e0e4d70a5f71] Running
	I0729 17:18:13.698920   29751 system_pods.go:89] "storage-provisioner" [50fa96e8-1ee5-4e09-a734-802dbcd02bcc] Running
	I0729 17:18:13.698927   29751 system_pods.go:126] duration metric: took 207.325939ms to wait for k8s-apps to be running ...
	I0729 17:18:13.698942   29751 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 17:18:13.698986   29751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:18:13.714277   29751 system_svc.go:56] duration metric: took 15.328082ms WaitForService to wait for kubelet
	I0729 17:18:13.714304   29751 kubeadm.go:582] duration metric: took 18.222944304s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:18:13.714324   29751 node_conditions.go:102] verifying NodePressure condition ...
	I0729 17:18:13.887756   29751 request.go:629] Waited for 173.332508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes
	I0729 17:18:13.887809   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes
	I0729 17:18:13.887814   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:13.887821   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:13.887825   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:13.891192   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:13.891807   29751 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 17:18:13.891827   29751 node_conditions.go:123] node cpu capacity is 2
	I0729 17:18:13.891844   29751 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 17:18:13.891847   29751 node_conditions.go:123] node cpu capacity is 2
	I0729 17:18:13.891851   29751 node_conditions.go:105] duration metric: took 177.523205ms to run NodePressure ...
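
The NodePressure step above lists all nodes and records their ephemeral-storage and CPU capacity while verifying no pressure conditions are set. The sketch below reproduces that read with client-go; the function name and output format are assumptions, and the capacity fields printed match the units shown in the log.

// Sketch: report per-node capacity and any pressure conditions.
package readiness

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func reportNodePressure(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					fmt.Printf("  pressure condition set: %s\n", c.Type)
				}
			}
		}
	}
	return nil
}
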
	I0729 17:18:13.891876   29751 start.go:241] waiting for startup goroutines ...
	I0729 17:18:13.891900   29751 start.go:255] writing updated cluster config ...
	I0729 17:18:13.893849   29751 out.go:177] 
	I0729 17:18:13.895505   29751 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:18:13.895594   29751 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/config.json ...
	I0729 17:18:13.897076   29751 out.go:177] * Starting "ha-900414-m03" control-plane node in "ha-900414" cluster
	I0729 17:18:13.898077   29751 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:18:13.898093   29751 cache.go:56] Caching tarball of preloaded images
	I0729 17:18:13.898193   29751 preload.go:172] Found /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 17:18:13.898205   29751 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 17:18:13.898279   29751 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/config.json ...
	I0729 17:18:13.898447   29751 start.go:360] acquireMachinesLock for ha-900414-m03: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:18:13.898489   29751 start.go:364] duration metric: took 22.948µs to acquireMachinesLock for "ha-900414-m03"
	I0729 17:18:13.898503   29751 start.go:93] Provisioning new machine with config: &{Name:ha-900414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-900414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:18:13.898590   29751 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0729 17:18:13.899887   29751 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 17:18:13.899984   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:18:13.900018   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:18:13.915789   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44245
	I0729 17:18:13.916164   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:18:13.916591   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:18:13.916616   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:18:13.916890   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:18:13.917032   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetMachineName
	I0729 17:18:13.917169   29751 main.go:141] libmachine: (ha-900414-m03) Calling .DriverName
	I0729 17:18:13.917313   29751 start.go:159] libmachine.API.Create for "ha-900414" (driver="kvm2")
	I0729 17:18:13.917336   29751 client.go:168] LocalClient.Create starting
	I0729 17:18:13.917366   29751 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem
	I0729 17:18:13.917402   29751 main.go:141] libmachine: Decoding PEM data...
	I0729 17:18:13.917421   29751 main.go:141] libmachine: Parsing certificate...
	I0729 17:18:13.917486   29751 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem
	I0729 17:18:13.917516   29751 main.go:141] libmachine: Decoding PEM data...
	I0729 17:18:13.917534   29751 main.go:141] libmachine: Parsing certificate...
	I0729 17:18:13.917559   29751 main.go:141] libmachine: Running pre-create checks...
	I0729 17:18:13.917568   29751 main.go:141] libmachine: (ha-900414-m03) Calling .PreCreateCheck
	I0729 17:18:13.917752   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetConfigRaw
	I0729 17:18:13.918086   29751 main.go:141] libmachine: Creating machine...
	I0729 17:18:13.918102   29751 main.go:141] libmachine: (ha-900414-m03) Calling .Create
	I0729 17:18:13.918221   29751 main.go:141] libmachine: (ha-900414-m03) Creating KVM machine...
	I0729 17:18:13.919564   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found existing default KVM network
	I0729 17:18:13.919766   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found existing private KVM network mk-ha-900414
	I0729 17:18:13.919919   29751 main.go:141] libmachine: (ha-900414-m03) Setting up store path in /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03 ...
	I0729 17:18:13.919939   29751 main.go:141] libmachine: (ha-900414-m03) Building disk image from file:///home/jenkins/minikube-integration/19345-11206/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 17:18:13.920041   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:13.919918   30991 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 17:18:13.920084   29751 main.go:141] libmachine: (ha-900414-m03) Downloading /home/jenkins/minikube-integration/19345-11206/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19345-11206/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 17:18:14.156338   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:14.156236   30991 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/id_rsa...
	I0729 17:18:14.216469   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:14.216360   30991 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/ha-900414-m03.rawdisk...
	I0729 17:18:14.216498   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Writing magic tar header
	I0729 17:18:14.216512   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Writing SSH key tar header
	I0729 17:18:14.216586   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:14.216530   30991 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03 ...
	I0729 17:18:14.216703   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03
	I0729 17:18:14.216731   29751 main.go:141] libmachine: (ha-900414-m03) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03 (perms=drwx------)
	I0729 17:18:14.216743   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube/machines
	I0729 17:18:14.216757   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 17:18:14.216767   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206
	I0729 17:18:14.216780   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 17:18:14.216788   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Checking permissions on dir: /home/jenkins
	I0729 17:18:14.216797   29751 main.go:141] libmachine: (ha-900414-m03) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube/machines (perms=drwxr-xr-x)
	I0729 17:18:14.216810   29751 main.go:141] libmachine: (ha-900414-m03) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube (perms=drwxr-xr-x)
	I0729 17:18:14.216824   29751 main.go:141] libmachine: (ha-900414-m03) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206 (perms=drwxrwxr-x)
	I0729 17:18:14.216836   29751 main.go:141] libmachine: (ha-900414-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 17:18:14.216848   29751 main.go:141] libmachine: (ha-900414-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 17:18:14.216866   29751 main.go:141] libmachine: (ha-900414-m03) Creating domain...
	I0729 17:18:14.216887   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Checking permissions on dir: /home
	I0729 17:18:14.216916   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Skipping /home - not owner
	I0729 17:18:14.217981   29751 main.go:141] libmachine: (ha-900414-m03) define libvirt domain using xml: 
	I0729 17:18:14.218003   29751 main.go:141] libmachine: (ha-900414-m03) <domain type='kvm'>
	I0729 17:18:14.218019   29751 main.go:141] libmachine: (ha-900414-m03)   <name>ha-900414-m03</name>
	I0729 17:18:14.218027   29751 main.go:141] libmachine: (ha-900414-m03)   <memory unit='MiB'>2200</memory>
	I0729 17:18:14.218036   29751 main.go:141] libmachine: (ha-900414-m03)   <vcpu>2</vcpu>
	I0729 17:18:14.218043   29751 main.go:141] libmachine: (ha-900414-m03)   <features>
	I0729 17:18:14.218054   29751 main.go:141] libmachine: (ha-900414-m03)     <acpi/>
	I0729 17:18:14.218060   29751 main.go:141] libmachine: (ha-900414-m03)     <apic/>
	I0729 17:18:14.218068   29751 main.go:141] libmachine: (ha-900414-m03)     <pae/>
	I0729 17:18:14.218078   29751 main.go:141] libmachine: (ha-900414-m03)     
	I0729 17:18:14.218086   29751 main.go:141] libmachine: (ha-900414-m03)   </features>
	I0729 17:18:14.218094   29751 main.go:141] libmachine: (ha-900414-m03)   <cpu mode='host-passthrough'>
	I0729 17:18:14.218103   29751 main.go:141] libmachine: (ha-900414-m03)   
	I0729 17:18:14.218112   29751 main.go:141] libmachine: (ha-900414-m03)   </cpu>
	I0729 17:18:14.218122   29751 main.go:141] libmachine: (ha-900414-m03)   <os>
	I0729 17:18:14.218134   29751 main.go:141] libmachine: (ha-900414-m03)     <type>hvm</type>
	I0729 17:18:14.218144   29751 main.go:141] libmachine: (ha-900414-m03)     <boot dev='cdrom'/>
	I0729 17:18:14.218160   29751 main.go:141] libmachine: (ha-900414-m03)     <boot dev='hd'/>
	I0729 17:18:14.218170   29751 main.go:141] libmachine: (ha-900414-m03)     <bootmenu enable='no'/>
	I0729 17:18:14.218192   29751 main.go:141] libmachine: (ha-900414-m03)   </os>
	I0729 17:18:14.218223   29751 main.go:141] libmachine: (ha-900414-m03)   <devices>
	I0729 17:18:14.218243   29751 main.go:141] libmachine: (ha-900414-m03)     <disk type='file' device='cdrom'>
	I0729 17:18:14.218263   29751 main.go:141] libmachine: (ha-900414-m03)       <source file='/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/boot2docker.iso'/>
	I0729 17:18:14.218277   29751 main.go:141] libmachine: (ha-900414-m03)       <target dev='hdc' bus='scsi'/>
	I0729 17:18:14.218288   29751 main.go:141] libmachine: (ha-900414-m03)       <readonly/>
	I0729 17:18:14.218299   29751 main.go:141] libmachine: (ha-900414-m03)     </disk>
	I0729 17:18:14.218310   29751 main.go:141] libmachine: (ha-900414-m03)     <disk type='file' device='disk'>
	I0729 17:18:14.218325   29751 main.go:141] libmachine: (ha-900414-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 17:18:14.218340   29751 main.go:141] libmachine: (ha-900414-m03)       <source file='/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/ha-900414-m03.rawdisk'/>
	I0729 17:18:14.218354   29751 main.go:141] libmachine: (ha-900414-m03)       <target dev='hda' bus='virtio'/>
	I0729 17:18:14.218382   29751 main.go:141] libmachine: (ha-900414-m03)     </disk>
	I0729 17:18:14.218397   29751 main.go:141] libmachine: (ha-900414-m03)     <interface type='network'>
	I0729 17:18:14.218405   29751 main.go:141] libmachine: (ha-900414-m03)       <source network='mk-ha-900414'/>
	I0729 17:18:14.218418   29751 main.go:141] libmachine: (ha-900414-m03)       <model type='virtio'/>
	I0729 17:18:14.218426   29751 main.go:141] libmachine: (ha-900414-m03)     </interface>
	I0729 17:18:14.218435   29751 main.go:141] libmachine: (ha-900414-m03)     <interface type='network'>
	I0729 17:18:14.218449   29751 main.go:141] libmachine: (ha-900414-m03)       <source network='default'/>
	I0729 17:18:14.218463   29751 main.go:141] libmachine: (ha-900414-m03)       <model type='virtio'/>
	I0729 17:18:14.218476   29751 main.go:141] libmachine: (ha-900414-m03)     </interface>
	I0729 17:18:14.218487   29751 main.go:141] libmachine: (ha-900414-m03)     <serial type='pty'>
	I0729 17:18:14.218494   29751 main.go:141] libmachine: (ha-900414-m03)       <target port='0'/>
	I0729 17:18:14.218506   29751 main.go:141] libmachine: (ha-900414-m03)     </serial>
	I0729 17:18:14.218512   29751 main.go:141] libmachine: (ha-900414-m03)     <console type='pty'>
	I0729 17:18:14.218523   29751 main.go:141] libmachine: (ha-900414-m03)       <target type='serial' port='0'/>
	I0729 17:18:14.218531   29751 main.go:141] libmachine: (ha-900414-m03)     </console>
	I0729 17:18:14.218543   29751 main.go:141] libmachine: (ha-900414-m03)     <rng model='virtio'>
	I0729 17:18:14.218558   29751 main.go:141] libmachine: (ha-900414-m03)       <backend model='random'>/dev/random</backend>
	I0729 17:18:14.218566   29751 main.go:141] libmachine: (ha-900414-m03)     </rng>
	I0729 17:18:14.218575   29751 main.go:141] libmachine: (ha-900414-m03)     
	I0729 17:18:14.218582   29751 main.go:141] libmachine: (ha-900414-m03)     
	I0729 17:18:14.218593   29751 main.go:141] libmachine: (ha-900414-m03)   </devices>
	I0729 17:18:14.218604   29751 main.go:141] libmachine: (ha-900414-m03) </domain>
	I0729 17:18:14.218615   29751 main.go:141] libmachine: (ha-900414-m03) 
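
The block above is the complete libvirt domain definition the kvm2 driver generates for the new node before creating it. As a minimal sketch of how such an XML definition could be registered with libvirt, the snippet below shells out to the virsh CLI; this is an assumption for illustration only (the driver itself talks to the libvirt API), and defineDomain plus the temp-file handling are hypothetical helpers, not minikube code.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// defineDomain writes the generated XML to a temporary file and registers it
// with libvirtd via "virsh define"; starting the machine is a separate step.
func defineDomain(domainXML string) error {
	f, err := os.CreateTemp("", "domain-*.xml")
	if err != nil {
		return err
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(domainXML); err != nil {
		return err
	}
	if err := f.Close(); err != nil {
		return err
	}
	out, err := exec.Command("virsh", "define", f.Name()).CombinedOutput()
	if err != nil {
		return fmt.Errorf("virsh define failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Path to a domain XML file shaped like the definition logged above.
	xml, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	if err := defineDomain(string(xml)); err != nil {
		panic(err)
	}
}
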
	I0729 17:18:14.225148   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:b1:6e:1c in network default
	I0729 17:18:14.225743   29751 main.go:141] libmachine: (ha-900414-m03) Ensuring networks are active...
	I0729 17:18:14.225762   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:14.226526   29751 main.go:141] libmachine: (ha-900414-m03) Ensuring network default is active
	I0729 17:18:14.226842   29751 main.go:141] libmachine: (ha-900414-m03) Ensuring network mk-ha-900414 is active
	I0729 17:18:14.227197   29751 main.go:141] libmachine: (ha-900414-m03) Getting domain xml...
	I0729 17:18:14.228032   29751 main.go:141] libmachine: (ha-900414-m03) Creating domain...
	I0729 17:18:15.454164   29751 main.go:141] libmachine: (ha-900414-m03) Waiting to get IP...
	I0729 17:18:15.455018   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:15.455501   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:15.455559   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:15.455499   30991 retry.go:31] will retry after 246.816517ms: waiting for machine to come up
	I0729 17:18:15.703907   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:15.704365   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:15.704392   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:15.704314   30991 retry.go:31] will retry after 245.373334ms: waiting for machine to come up
	I0729 17:18:15.951830   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:15.952257   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:15.952280   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:15.952232   30991 retry.go:31] will retry after 485.466801ms: waiting for machine to come up
	I0729 17:18:16.439601   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:16.440052   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:16.440079   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:16.440003   30991 retry.go:31] will retry after 473.462646ms: waiting for machine to come up
	I0729 17:18:16.914497   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:16.914866   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:16.914891   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:16.914828   30991 retry.go:31] will retry after 726.597775ms: waiting for machine to come up
	I0729 17:18:17.642694   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:17.643183   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:17.643212   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:17.643131   30991 retry.go:31] will retry after 629.97819ms: waiting for machine to come up
	I0729 17:18:18.274868   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:18.275362   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:18.275383   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:18.275319   30991 retry.go:31] will retry after 1.120227935s: waiting for machine to come up
	I0729 17:18:19.397310   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:19.397890   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:19.397915   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:19.397832   30991 retry.go:31] will retry after 1.410249374s: waiting for machine to come up
	I0729 17:18:20.810390   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:20.810770   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:20.810792   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:20.810719   30991 retry.go:31] will retry after 1.713663054s: waiting for machine to come up
	I0729 17:18:22.526050   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:22.526512   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:22.526539   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:22.526467   30991 retry.go:31] will retry after 1.966005335s: waiting for machine to come up
	I0729 17:18:24.494120   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:24.494550   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:24.494576   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:24.494501   30991 retry.go:31] will retry after 1.93915854s: waiting for machine to come up
	I0729 17:18:26.435501   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:26.435943   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:26.435970   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:26.435906   30991 retry.go:31] will retry after 3.220477941s: waiting for machine to come up
	I0729 17:18:29.658111   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:29.658606   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:29.658624   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:29.658599   30991 retry.go:31] will retry after 3.433937188s: waiting for machine to come up
	I0729 17:18:33.093711   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:33.094160   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:33.094187   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:33.094117   30991 retry.go:31] will retry after 5.222497284s: waiting for machine to come up
	I0729 17:18:38.319384   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:38.319856   29751 main.go:141] libmachine: (ha-900414-m03) Found IP for machine: 192.168.39.6
	I0729 17:18:38.319876   29751 main.go:141] libmachine: (ha-900414-m03) Reserving static IP address...
	I0729 17:18:38.319885   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has current primary IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:38.320272   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find host DHCP lease matching {name: "ha-900414-m03", mac: "52:54:00:df:ef:4e", ip: "192.168.39.6"} in network mk-ha-900414
	I0729 17:18:38.391778   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Getting to WaitForSSH function...
	I0729 17:18:38.391811   29751 main.go:141] libmachine: (ha-900414-m03) Reserved static IP address: 192.168.39.6
	I0729 17:18:38.391857   29751 main.go:141] libmachine: (ha-900414-m03) Waiting for SSH to be available...
	I0729 17:18:38.394804   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:38.395386   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:minikube Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:38.395424   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:38.395631   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Using SSH client type: external
	I0729 17:18:38.395650   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/id_rsa (-rw-------)
	I0729 17:18:38.395684   29751 main.go:141] libmachine: (ha-900414-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.6 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 17:18:38.395694   29751 main.go:141] libmachine: (ha-900414-m03) DBG | About to run SSH command:
	I0729 17:18:38.395710   29751 main.go:141] libmachine: (ha-900414-m03) DBG | exit 0
	I0729 17:18:38.522593   29751 main.go:141] libmachine: (ha-900414-m03) DBG | SSH cmd err, output: <nil>: 
	I0729 17:18:38.522888   29751 main.go:141] libmachine: (ha-900414-m03) KVM machine creation complete!
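
The retry.go lines above poll for the new domain's DHCP lease with a growing delay ("will retry after ...") until an IP appears, then wait for SSH. Below is a minimal sketch of that backoff pattern, assuming a hypothetical lookup callback; the real driver queries libvirt's lease table, and the delays and cap here are illustrative, not minikube's exact values.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup with a growing, jittered, capped delay until it
// returns an address or the deadline expires.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		// Add a little jitter, then grow the delay with a cap so we keep polling.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay *= 2; delay > 5*time.Second {
			delay = 5 * time.Second
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.6", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}
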
	I0729 17:18:38.523242   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetConfigRaw
	I0729 17:18:38.523691   29751 main.go:141] libmachine: (ha-900414-m03) Calling .DriverName
	I0729 17:18:38.523865   29751 main.go:141] libmachine: (ha-900414-m03) Calling .DriverName
	I0729 17:18:38.524008   29751 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 17:18:38.524022   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetState
	I0729 17:18:38.525265   29751 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 17:18:38.525279   29751 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 17:18:38.525296   29751 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 17:18:38.525305   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:18:38.527540   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:38.527956   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:38.527986   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:38.528120   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:18:38.528302   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:38.528441   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:38.528562   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:18:38.528701   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:18:38.528901   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0729 17:18:38.528912   29751 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 17:18:38.645896   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:18:38.645919   29751 main.go:141] libmachine: Detecting the provisioner...
	I0729 17:18:38.645927   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:18:38.648526   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:38.648855   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:38.648896   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:38.649028   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:18:38.649220   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:38.649396   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:38.649515   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:18:38.649636   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:18:38.649784   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0729 17:18:38.649793   29751 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 17:18:38.755214   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 17:18:38.755271   29751 main.go:141] libmachine: found compatible host: buildroot
	I0729 17:18:38.755278   29751 main.go:141] libmachine: Provisioning with buildroot...
	I0729 17:18:38.755285   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetMachineName
	I0729 17:18:38.755503   29751 buildroot.go:166] provisioning hostname "ha-900414-m03"
	I0729 17:18:38.755531   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetMachineName
	I0729 17:18:38.755718   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:18:38.758316   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:38.758703   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:38.758733   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:38.758836   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:18:38.758985   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:38.759144   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:38.759277   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:18:38.759433   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:18:38.759575   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0729 17:18:38.759586   29751 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-900414-m03 && echo "ha-900414-m03" | sudo tee /etc/hostname
	I0729 17:18:38.882446   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-900414-m03
	
	I0729 17:18:38.882477   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:18:38.885220   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:38.885619   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:38.885644   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:38.885838   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:18:38.886006   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:38.886159   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:38.886286   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:18:38.886465   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:18:38.886616   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0729 17:18:38.886632   29751 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-900414-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-900414-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-900414-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 17:18:39.005370   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:18:39.005402   29751 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 17:18:39.005421   29751 buildroot.go:174] setting up certificates
	I0729 17:18:39.005431   29751 provision.go:84] configureAuth start
	I0729 17:18:39.005447   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetMachineName
	I0729 17:18:39.005732   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetIP
	I0729 17:18:39.008861   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.009335   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:39.009365   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.009509   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:18:39.011833   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.012220   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:39.012251   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.012398   29751 provision.go:143] copyHostCerts
	I0729 17:18:39.012428   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 17:18:39.012475   29751 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 17:18:39.012486   29751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 17:18:39.012572   29751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 17:18:39.012674   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 17:18:39.012700   29751 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 17:18:39.012709   29751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 17:18:39.012739   29751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 17:18:39.012792   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 17:18:39.012814   29751 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 17:18:39.012822   29751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 17:18:39.012858   29751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 17:18:39.012928   29751 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.ha-900414-m03 san=[127.0.0.1 192.168.39.6 ha-900414-m03 localhost minikube]
	I0729 17:18:39.065377   29751 provision.go:177] copyRemoteCerts
	I0729 17:18:39.065440   29751 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 17:18:39.065468   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:18:39.068586   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.068997   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:39.069018   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.069216   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:18:39.069424   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:39.069575   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:18:39.069708   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/id_rsa Username:docker}
	I0729 17:18:39.156161   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 17:18:39.156219   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 17:18:39.180706   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 17:18:39.180791   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 17:18:39.206593   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 17:18:39.206659   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 17:18:39.233410   29751 provision.go:87] duration metric: took 227.965466ms to configureAuth
	I0729 17:18:39.233435   29751 buildroot.go:189] setting minikube options for container-runtime
	I0729 17:18:39.233657   29751 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:18:39.233753   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:18:39.236299   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.236654   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:39.236682   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.236826   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:18:39.237048   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:39.237234   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:39.237392   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:18:39.237560   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:18:39.237724   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0729 17:18:39.237737   29751 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 17:18:39.507521   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 17:18:39.507556   29751 main.go:141] libmachine: Checking connection to Docker...
	I0729 17:18:39.507566   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetURL
	I0729 17:18:39.508893   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Using libvirt version 6000000
	I0729 17:18:39.511229   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.511654   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:39.511673   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.511853   29751 main.go:141] libmachine: Docker is up and running!
	I0729 17:18:39.511866   29751 main.go:141] libmachine: Reticulating splines...
	I0729 17:18:39.511874   29751 client.go:171] duration metric: took 25.594531169s to LocalClient.Create
	I0729 17:18:39.511916   29751 start.go:167] duration metric: took 25.594604458s to libmachine.API.Create "ha-900414"
	I0729 17:18:39.511927   29751 start.go:293] postStartSetup for "ha-900414-m03" (driver="kvm2")
	I0729 17:18:39.511935   29751 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 17:18:39.511950   29751 main.go:141] libmachine: (ha-900414-m03) Calling .DriverName
	I0729 17:18:39.512166   29751 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 17:18:39.512189   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:18:39.514637   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.514981   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:39.515004   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.515082   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:18:39.515268   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:39.515394   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:18:39.515512   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/id_rsa Username:docker}
	I0729 17:18:39.600547   29751 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 17:18:39.604970   29751 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 17:18:39.604998   29751 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 17:18:39.605058   29751 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 17:18:39.605127   29751 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 17:18:39.605136   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> /etc/ssl/certs/183932.pem
	I0729 17:18:39.605218   29751 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 17:18:39.614337   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 17:18:39.639317   29751 start.go:296] duration metric: took 127.361162ms for postStartSetup
	I0729 17:18:39.639378   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetConfigRaw
	I0729 17:18:39.640029   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetIP
	I0729 17:18:39.642790   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.643146   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:39.643181   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.643470   29751 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/config.json ...
	I0729 17:18:39.643786   29751 start.go:128] duration metric: took 25.745185719s to createHost
	I0729 17:18:39.643812   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:18:39.646065   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.646471   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:39.646490   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.646764   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:18:39.646928   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:39.647019   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:39.647184   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:18:39.647361   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:18:39.647546   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0729 17:18:39.647560   29751 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 17:18:39.755200   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722273519.732548551
	
	I0729 17:18:39.755229   29751 fix.go:216] guest clock: 1722273519.732548551
	I0729 17:18:39.755235   29751 fix.go:229] Guest: 2024-07-29 17:18:39.732548551 +0000 UTC Remote: 2024-07-29 17:18:39.643800136 +0000 UTC m=+160.000060021 (delta=88.748415ms)
	I0729 17:18:39.755253   29751 fix.go:200] guest clock delta is within tolerance: 88.748415ms
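
The fix.go lines above compare the guest clock against the host and accept the machine because the ~89ms delta is small. A minimal sketch of that comparison follows; the 2-second tolerance is an assumption for illustration, not necessarily the value minikube uses.

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK returns the absolute guest/host clock delta and whether it is
// within the given tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(0, 1722273519732548551) // guest clock: 1722273519.732548551
	host := time.Date(2024, 7, 29, 17, 18, 39, 643800136, time.UTC)
	delta, ok := clockDeltaOK(guest, host, 2*time.Second) // tolerance is an assumed value
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}
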
	I0729 17:18:39.755258   29751 start.go:83] releasing machines lock for "ha-900414-m03", held for 25.856762836s
	I0729 17:18:39.755277   29751 main.go:141] libmachine: (ha-900414-m03) Calling .DriverName
	I0729 17:18:39.755513   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetIP
	I0729 17:18:39.758889   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.759556   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:39.759585   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.761549   29751 out.go:177] * Found network options:
	I0729 17:18:39.762906   29751 out.go:177]   - NO_PROXY=192.168.39.114,192.168.39.111
	W0729 17:18:39.764111   29751 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 17:18:39.764131   29751 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 17:18:39.764158   29751 main.go:141] libmachine: (ha-900414-m03) Calling .DriverName
	I0729 17:18:39.764706   29751 main.go:141] libmachine: (ha-900414-m03) Calling .DriverName
	I0729 17:18:39.764888   29751 main.go:141] libmachine: (ha-900414-m03) Calling .DriverName
	I0729 17:18:39.764989   29751 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 17:18:39.765028   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	W0729 17:18:39.765084   29751 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 17:18:39.765101   29751 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 17:18:39.765157   29751 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 17:18:39.765171   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:18:39.767982   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.768326   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.768368   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:39.768394   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.768541   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:18:39.768714   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:39.768809   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:39.768827   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.768901   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:18:39.768974   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:18:39.769048   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/id_rsa Username:docker}
	I0729 17:18:39.769119   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:39.769248   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:18:39.769396   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/id_rsa Username:docker}
	I0729 17:18:40.008382   29751 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 17:18:40.015691   29751 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 17:18:40.015761   29751 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 17:18:40.032671   29751 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 17:18:40.032692   29751 start.go:495] detecting cgroup driver to use...
	I0729 17:18:40.032762   29751 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 17:18:40.050414   29751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 17:18:40.066938   29751 docker.go:217] disabling cri-docker service (if available) ...
	I0729 17:18:40.066991   29751 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 17:18:40.081494   29751 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 17:18:40.095961   29751 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 17:18:40.222640   29751 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 17:18:40.360965   29751 docker.go:233] disabling docker service ...
	I0729 17:18:40.361045   29751 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 17:18:40.375633   29751 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 17:18:40.388273   29751 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 17:18:40.532840   29751 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 17:18:40.676072   29751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 17:18:40.689785   29751 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 17:18:40.709089   29751 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 17:18:40.709150   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:18:40.719494   29751 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 17:18:40.719560   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:18:40.730041   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:18:40.740211   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:18:40.750185   29751 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 17:18:40.760826   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:18:40.771677   29751 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:18:40.788399   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
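The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf: pin the pause image to registry.k8s.io/pause:3.9, switch CRI-O to the cgroupfs cgroup manager with conmon in the "pod" cgroup, and allow unprivileged low ports via default_sysctls. A rough Go equivalent, simplified in that it only appends default_sysctls when the key is absent (the file path is taken from the log):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// Apply the same edits as the sed commands above to a CRI-O drop-in config.
func patchCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := string(data)
	out = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(out, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	out = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`).ReplaceAllString(out, "")
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(out, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	// Simplification: append default_sysctls only when it is missing entirely.
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(out) {
		out += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	return os.WriteFile(path, []byte(out), 0o644)
}

func main() {
	if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}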
	I0729 17:18:40.798349   29751 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 17:18:40.807516   29751 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 17:18:40.807575   29751 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 17:18:40.821127   29751 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
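The sysctl probe above fails until the br_netfilter module is loaded, so the runner falls back to modprobe and then enables IP forwarding. A sketch of the same sequence, assuming root and a local /proc rather than SSH:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Make bridged traffic visible to iptables and turn on IP forwarding,
// mirroring the modprobe/echo steps above.
func enableKubeNetworking() error {
	const bridgeNF = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(bridgeNF); os.IsNotExist(err) {
		// br_netfilter is not loaded yet; load it so the sysctl appears.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	if err := os.WriteFile(bridgeNF, []byte("1"), 0o644); err != nil {
		return err
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
}

func main() {
	if err := enableKubeNetworking(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}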
	I0729 17:18:40.830609   29751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:18:40.946720   29751 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 17:18:41.086008   29751 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 17:18:41.086073   29751 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 17:18:41.090668   29751 start.go:563] Will wait 60s for crictl version
	I0729 17:18:41.090720   29751 ssh_runner.go:195] Run: which crictl
	I0729 17:18:41.094290   29751 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 17:18:41.141366   29751 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 17:18:41.141452   29751 ssh_runner.go:195] Run: crio --version
	I0729 17:18:41.169254   29751 ssh_runner.go:195] Run: crio --version
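After restarting CRI-O, the runner waits up to 60s for /var/run/crio/crio.sock and then asks crictl for the runtime version (CRI-O 1.29.1 here). A small polling sketch; the 500ms interval is an assumption:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// Wait for the CRI socket to appear, then print `crictl version` output.
func waitForCRIO(socket string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(socket); err == nil {
			out, err := exec.Command("crictl", "--runtime-endpoint", "unix://"+socket, "version").CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not ready after %s", socket, timeout)
}

func main() {
	if err := waitForCRIO("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}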
	I0729 17:18:41.198516   29751 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 17:18:41.199921   29751 out.go:177]   - env NO_PROXY=192.168.39.114
	I0729 17:18:41.201078   29751 out.go:177]   - env NO_PROXY=192.168.39.114,192.168.39.111
	I0729 17:18:41.202122   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetIP
	I0729 17:18:41.204737   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:41.205120   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:41.205146   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:41.205306   29751 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 17:18:41.209344   29751 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
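The grep/echo/cp one-liner above pins host.minikube.internal in /etc/hosts idempotently: any stale entry for that name is dropped before the current mapping is appended. A Go sketch of the same edit, writing the file in place instead of staging in /tmp and copying with sudo:

package main

import (
	"fmt"
	"os"
	"strings"
)

// Drop any stale "<tab>name" entry from the hosts file, then append "ip<tab>name".
func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}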
	I0729 17:18:41.221916   29751 mustload.go:65] Loading cluster: ha-900414
	I0729 17:18:41.222106   29751 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:18:41.222385   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:18:41.222430   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:18:41.237599   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32921
	I0729 17:18:41.238105   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:18:41.238665   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:18:41.238682   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:18:41.238992   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:18:41.239156   29751 main.go:141] libmachine: (ha-900414) Calling .GetState
	I0729 17:18:41.240467   29751 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:18:41.240786   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:18:41.240824   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:18:41.254764   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42031
	I0729 17:18:41.255100   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:18:41.255454   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:18:41.255468   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:18:41.255732   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:18:41.255891   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:18:41.256046   29751 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414 for IP: 192.168.39.6
	I0729 17:18:41.256059   29751 certs.go:194] generating shared ca certs ...
	I0729 17:18:41.256075   29751 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:18:41.256213   29751 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 17:18:41.256263   29751 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 17:18:41.256279   29751 certs.go:256] generating profile certs ...
	I0729 17:18:41.256375   29751 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.key
	I0729 17:18:41.256425   29751 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.826d1828
	I0729 17:18:41.256446   29751 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.826d1828 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114 192.168.39.111 192.168.39.6 192.168.39.254]
	I0729 17:18:41.489384   29751 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.826d1828 ...
	I0729 17:18:41.489413   29751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.826d1828: {Name:mk943bd45e2a4e4e4c4affd69e2cd693563da4e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:18:41.489592   29751 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.826d1828 ...
	I0729 17:18:41.489611   29751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.826d1828: {Name:mk5e48b8f3e65218b7961a6917dda810634f838b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:18:41.489706   29751 certs.go:381] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.826d1828 -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt
	I0729 17:18:41.489844   29751 certs.go:385] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.826d1828 -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key
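The apiserver serving certificate generated above carries IP SANs for the service IP, localhost, every control-plane node (192.168.39.114, .111, .6) and the kube-vip VIP 192.168.39.254, so a client can validate the cert against any of those endpoints. A sketch of building such a certificate with crypto/x509; it is self-signed here for brevity, whereas minikube signs it with its cluster CA (minikubeCA):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs copied from the generation step above.
	ips := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.39.114"), net.ParseIP("192.168.39.111"),
		net.ParseIP("192.168.39.6"), net.ParseIP("192.168.39.254"),
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     []string{"kubernetes", "kubernetes.default", "kubernetes.default.svc.cluster.local"},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}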
	I0729 17:18:41.489962   29751 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key
	I0729 17:18:41.489975   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 17:18:41.489987   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 17:18:41.490000   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 17:18:41.490012   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 17:18:41.490031   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 17:18:41.490053   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 17:18:41.490098   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 17:18:41.490118   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 17:18:41.490179   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 17:18:41.490206   29751 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 17:18:41.490215   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 17:18:41.490235   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 17:18:41.490259   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 17:18:41.490282   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 17:18:41.490318   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 17:18:41.490342   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:18:41.490357   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem -> /usr/share/ca-certificates/18393.pem
	I0729 17:18:41.490393   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> /usr/share/ca-certificates/183932.pem
	I0729 17:18:41.490437   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:18:41.493337   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:18:41.493728   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:18:41.493768   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:18:41.493929   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:18:41.494112   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:18:41.494260   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:18:41.494391   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:18:41.574702   29751 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0729 17:18:41.580327   29751 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 17:18:41.594027   29751 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0729 17:18:41.599399   29751 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0729 17:18:41.611102   29751 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 17:18:41.615269   29751 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 17:18:41.627335   29751 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0729 17:18:41.631697   29751 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0729 17:18:41.643941   29751 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0729 17:18:41.648561   29751 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 17:18:41.659576   29751 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0729 17:18:41.664123   29751 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0729 17:18:41.675373   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 17:18:41.701044   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 17:18:41.725127   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 17:18:41.749289   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 17:18:41.780486   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0729 17:18:41.804847   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 17:18:41.830878   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 17:18:41.856997   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 17:18:41.885612   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 17:18:41.911375   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 17:18:41.935508   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 17:18:41.962258   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 17:18:41.978918   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0729 17:18:41.995475   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 17:18:42.015357   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0729 17:18:42.033472   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 17:18:42.051159   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0729 17:18:42.068362   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 17:18:42.086927   29751 ssh_runner.go:195] Run: openssl version
	I0729 17:18:42.092826   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 17:18:42.103736   29751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 17:18:42.108386   29751 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 17:18:42.108437   29751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 17:18:42.114415   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 17:18:42.125122   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 17:18:42.135738   29751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:18:42.140371   29751 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:18:42.140416   29751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:18:42.146023   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 17:18:42.157101   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 17:18:42.168377   29751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 17:18:42.173027   29751 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 17:18:42.173078   29751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 17:18:42.178674   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
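Each CA bundle copied to /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash so the system trust store picks it up. A sketch of that hash-and-symlink step, shelling out to openssl just as the commands above do:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// Link a CA bundle into /etc/ssl/certs under its OpenSSL subject hash.
func linkBySubjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// Replace any stale link so repeated runs stay idempotent.
	_ = os.Remove(link)
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("created", link)
}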
	I0729 17:18:42.189395   29751 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 17:18:42.193772   29751 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 17:18:42.193828   29751 kubeadm.go:934] updating node {m03 192.168.39.6 8443 v1.30.3 crio true true} ...
	I0729 17:18:42.193936   29751 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-900414-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-900414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
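The kubelet drop-in shown above is rendered per node: the binary path encodes the Kubernetes version, and --hostname-override/--node-ip carry the node's name and IP. A small text/template sketch with illustrative field names (minikube fills these from its node config):

package main

import (
	"os"
	"text/template"
)

// Render the per-node kubelet systemd drop-in from the log above.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(dropIn))
	_ = tmpl.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.30.3", "ha-900414-m03", "192.168.39.6"})
}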
	I0729 17:18:42.193970   29751 kube-vip.go:115] generating kube-vip config ...
	I0729 17:18:42.194012   29751 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 17:18:42.212288   29751 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 17:18:42.212355   29751 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0729 17:18:42.212405   29751 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 17:18:42.223812   29751 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 17:18:42.223874   29751 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 17:18:42.233769   29751 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 17:18:42.233784   29751 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0729 17:18:42.233793   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 17:18:42.233813   29751 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0729 17:18:42.233825   29751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:18:42.233828   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 17:18:42.233880   29751 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 17:18:42.233899   29751 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 17:18:42.252670   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 17:18:42.252700   29751 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 17:18:42.252728   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 17:18:42.252765   29751 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 17:18:42.252820   29751 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 17:18:42.252845   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 17:18:42.278610   29751 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 17:18:42.278655   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
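In this run the kubeadm/kubectl/kubelet binaries come from the local cache and are scp'd to the node; the cache itself is filled from the dl.k8s.io URLs shown in the "Not caching binary" lines, each verified against its published .sha256 file. A sketch of that download-and-verify step (downloading straight to the node path here is a simplification):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// Fetch a release binary and check it against the published .sha256 file.
func fetchVerified(url, dest string) error {
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	want := strings.Fields(string(sumBytes))[0]

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.OpenFile(dest, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s want %s", got, want)
	}
	return nil
}

func main() {
	url := "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm"
	if err := fetchVerified(url, "/var/lib/minikube/binaries/v1.30.3/kubeadm"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}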
	I0729 17:18:43.143466   29751 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 17:18:43.153090   29751 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0729 17:18:43.169229   29751 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 17:18:43.185296   29751 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 17:18:43.201501   29751 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 17:18:43.205346   29751 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 17:18:43.217546   29751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:18:43.348207   29751 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:18:43.365606   29751 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:18:43.366044   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:18:43.366091   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:18:43.383436   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37583
	I0729 17:18:43.383836   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:18:43.384341   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:18:43.384365   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:18:43.384732   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:18:43.384911   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:18:43.385078   29751 start.go:317] joinCluster: &{Name:ha-900414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-900414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:18:43.385216   29751 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 17:18:43.385236   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:18:43.387931   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:18:43.388317   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:18:43.388347   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:18:43.388514   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:18:43.388672   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:18:43.388844   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:18:43.388972   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:18:43.548828   29751 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:18:43.548874   29751 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cgee31.5r7eghabux47j74p --discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-900414-m03 --control-plane --apiserver-advertise-address=192.168.39.6 --apiserver-bind-port=8443"
	I0729 17:19:06.594553   29751 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cgee31.5r7eghabux47j74p --discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-900414-m03 --control-plane --apiserver-advertise-address=192.168.39.6 --apiserver-bind-port=8443": (23.045646537s)
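The join above is assembled from the output of `kubeadm token create --print-join-command` plus the control-plane flags (CRI socket, node name, advertise address, bind port). A sketch that builds the same argument list; running it locally rather than over SSH, and only printing it, are assumptions of this sketch:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Turn a print-join-command line into the control-plane join invocation
// used above (flags mirror the log).
func buildJoin(printJoinOutput, nodeName, advertiseIP string) []string {
	args := strings.Fields(strings.TrimSpace(printJoinOutput))[1:] // drop leading "kubeadm"
	return append(args,
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name="+nodeName,
		"--control-plane",
		"--apiserver-advertise-address="+advertiseIP,
		"--apiserver-bind-port=8443",
	)
}

func main() {
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	args := buildJoin(string(out), "ha-900414-m03", "192.168.39.6")
	fmt.Println("kubeadm", strings.Join(args, " "))
}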
	I0729 17:19:06.594588   29751 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 17:19:07.239944   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-900414-m03 minikube.k8s.io/updated_at=2024_07_29T17_19_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899 minikube.k8s.io/name=ha-900414 minikube.k8s.io/primary=false
	I0729 17:19:07.352742   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-900414-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 17:19:07.472219   29751 start.go:319] duration metric: took 24.087139049s to joinCluster
	I0729 17:19:07.472317   29751 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:19:07.472621   29751 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:19:07.473775   29751 out.go:177] * Verifying Kubernetes components...
	I0729 17:19:07.475144   29751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:19:07.793820   29751 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:19:07.849414   29751 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 17:19:07.849677   29751 kapi.go:59] client config for ha-900414: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.crt", KeyFile:"/home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.key", CAFile:"/home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 17:19:07.849744   29751 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.114:8443
	I0729 17:19:07.849994   29751 node_ready.go:35] waiting up to 6m0s for node "ha-900414-m03" to be "Ready" ...
	I0729 17:19:07.850113   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:07.850123   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:07.850135   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:07.850141   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:07.853663   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:08.350487   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:08.350512   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:08.350524   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:08.350529   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:08.354556   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:19:08.850616   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:08.850636   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:08.850645   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:08.850648   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:08.854661   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:09.350522   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:09.350604   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:09.350618   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:09.350623   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:09.356116   29751 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 17:19:09.850579   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:09.850599   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:09.850607   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:09.850610   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:09.854987   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:19:09.855646   29751 node_ready.go:53] node "ha-900414-m03" has status "Ready":"False"
	I0729 17:19:10.350985   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:10.351007   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:10.351019   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:10.351025   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:10.354678   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:10.850514   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:10.850533   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:10.850541   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:10.850545   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:10.854110   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:11.351209   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:11.351249   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:11.351266   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:11.351271   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:11.354932   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:11.850823   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:11.850844   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:11.850852   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:11.850858   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:11.854984   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:19:12.350958   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:12.350989   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:12.351000   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:12.351007   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:12.354415   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:12.355130   29751 node_ready.go:53] node "ha-900414-m03" has status "Ready":"False"
	I0729 17:19:12.850262   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:12.850281   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:12.850289   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:12.850294   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:12.853509   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:13.350496   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:13.350516   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:13.350524   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:13.350529   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:13.353935   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:13.850728   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:13.850751   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:13.850759   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:13.850764   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:13.854749   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:14.350461   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:14.350480   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:14.350490   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:14.350494   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:14.354803   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:19:14.355421   29751 node_ready.go:53] node "ha-900414-m03" has status "Ready":"False"
	I0729 17:19:14.850187   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:14.850221   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:14.850231   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:14.850237   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:14.853494   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:15.350506   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:15.350526   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:15.350534   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:15.350541   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:15.353679   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:15.850900   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:15.850925   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:15.850935   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:15.850943   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:15.853670   29751 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:19:16.351143   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:16.351165   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:16.351176   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:16.351181   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:16.354410   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:16.850396   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:16.850416   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:16.850425   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:16.850428   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:16.853683   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:16.854325   29751 node_ready.go:53] node "ha-900414-m03" has status "Ready":"False"
	I0729 17:19:17.350639   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:17.350658   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:17.350667   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:17.350672   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:17.354867   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:19:17.851013   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:17.851037   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:17.851049   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:17.851053   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:17.854718   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:18.350501   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:18.350520   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:18.350536   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:18.350541   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:18.353940   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:18.851009   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:18.851033   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:18.851045   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:18.851050   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:18.854509   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:18.855161   29751 node_ready.go:53] node "ha-900414-m03" has status "Ready":"False"
	I0729 17:19:19.350465   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:19.350483   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:19.350491   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:19.350495   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:19.353618   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:19.850345   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:19.850381   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:19.850393   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:19.850400   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:19.853469   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:20.351156   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:20.351182   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:20.351192   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:20.351199   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:20.355240   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:19:20.850380   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:20.850403   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:20.850411   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:20.850415   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:20.854135   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:21.351083   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:21.351108   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:21.351119   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:21.351124   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:21.355081   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:21.355742   29751 node_ready.go:53] node "ha-900414-m03" has status "Ready":"False"
	I0729 17:19:21.851203   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:21.851224   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:21.851231   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:21.851236   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:21.854194   29751 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:19:21.854912   29751 node_ready.go:49] node "ha-900414-m03" has status "Ready":"True"
	I0729 17:19:21.854974   29751 node_ready.go:38] duration metric: took 14.004961019s for node "ha-900414-m03" to be "Ready" ...
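The repeated GETs against /api/v1/nodes/ha-900414-m03 above are simply polling the node's Ready condition until it flips to True (about 14s here). The same wait expressed with client-go, with the kubeconfig path taken from the log and the poll interval assumed:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Poll the node's Ready condition until it is True.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19345-11206/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "ha-900414-m03"); err != nil {
		panic(err)
	}
	fmt.Println("node ha-900414-m03 is Ready")
}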
	I0729 17:19:21.854990   29751 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 17:19:21.855079   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I0729 17:19:21.855091   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:21.855102   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:21.855119   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:21.863482   29751 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0729 17:19:21.870015   29751 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-48j6w" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:21.870082   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-48j6w
	I0729 17:19:21.870090   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:21.870097   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:21.870101   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:21.873025   29751 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:19:21.874008   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:19:21.874024   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:21.874030   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:21.874035   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:21.877376   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:21.878242   29751 pod_ready.go:92] pod "coredns-7db6d8ff4d-48j6w" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:21.878257   29751 pod_ready.go:81] duration metric: took 8.220998ms for pod "coredns-7db6d8ff4d-48j6w" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:21.878264   29751 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9r87x" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:21.878306   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9r87x
	I0729 17:19:21.878313   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:21.878320   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:21.878324   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:21.881497   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:21.882699   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:19:21.882712   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:21.882718   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:21.882721   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:21.885515   29751 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:19:21.886266   29751 pod_ready.go:92] pod "coredns-7db6d8ff4d-9r87x" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:21.886281   29751 pod_ready.go:81] duration metric: took 8.011311ms for pod "coredns-7db6d8ff4d-9r87x" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:21.886288   29751 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:21.886328   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-900414
	I0729 17:19:21.886335   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:21.886342   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:21.886347   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:21.888538   29751 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:19:21.888993   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:19:21.889005   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:21.889012   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:21.889016   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:21.891453   29751 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:19:21.892242   29751 pod_ready.go:92] pod "etcd-ha-900414" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:21.892263   29751 pod_ready.go:81] duration metric: took 5.969237ms for pod "etcd-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:21.892285   29751 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:21.892339   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-900414-m02
	I0729 17:19:21.892348   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:21.892355   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:21.892359   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:21.895297   29751 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:19:21.895969   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:19:21.895985   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:21.895995   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:21.896000   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:21.898796   29751 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:19:21.899611   29751 pod_ready.go:92] pod "etcd-ha-900414-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:21.899625   29751 pod_ready.go:81] duration metric: took 7.333134ms for pod "etcd-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:21.899632   29751 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-900414-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:22.052019   29751 request.go:629] Waited for 152.334115ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-900414-m03
	I0729 17:19:22.052078   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-900414-m03
	I0729 17:19:22.052094   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:22.052104   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:22.052108   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:22.055335   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:22.251244   29751 request.go:629] Waited for 195.251841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:22.251313   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:22.251324   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:22.251335   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:22.251345   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:22.255297   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:22.256267   29751 pod_ready.go:92] pod "etcd-ha-900414-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:22.256289   29751 pod_ready.go:81] duration metric: took 356.650571ms for pod "etcd-ha-900414-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:22.256312   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:22.451250   29751 request.go:629] Waited for 194.873541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-900414
	I0729 17:19:22.451372   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-900414
	I0729 17:19:22.451388   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:22.451398   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:22.451403   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:22.455235   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:22.652048   29751 request.go:629] Waited for 196.262816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:19:22.652095   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:19:22.652100   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:22.652114   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:22.652120   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:22.655573   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:22.656081   29751 pod_ready.go:92] pod "kube-apiserver-ha-900414" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:22.656100   29751 pod_ready.go:81] duration metric: took 399.776412ms for pod "kube-apiserver-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:22.656112   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:22.852262   29751 request.go:629] Waited for 196.068275ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-900414-m02
	I0729 17:19:22.852321   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-900414-m02
	I0729 17:19:22.852328   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:22.852335   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:22.852341   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:22.855970   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:23.052127   29751 request.go:629] Waited for 195.362656ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:19:23.052208   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:19:23.052216   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:23.052227   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:23.052235   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:23.055712   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:23.056126   29751 pod_ready.go:92] pod "kube-apiserver-ha-900414-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:23.056140   29751 pod_ready.go:81] duration metric: took 400.012328ms for pod "kube-apiserver-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:23.056149   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-900414-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:23.252272   29751 request.go:629] Waited for 196.048736ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-900414-m03
	I0729 17:19:23.252349   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-900414-m03
	I0729 17:19:23.252355   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:23.252362   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:23.252367   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:23.256080   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:23.451956   29751 request.go:629] Waited for 195.269252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:23.452073   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:23.452085   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:23.452096   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:23.452108   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:23.457021   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:19:23.457679   29751 pod_ready.go:92] pod "kube-apiserver-ha-900414-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:23.457696   29751 pod_ready.go:81] duration metric: took 401.53635ms for pod "kube-apiserver-ha-900414-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:23.457715   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:23.651395   29751 request.go:629] Waited for 193.614796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-900414
	I0729 17:19:23.651468   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-900414
	I0729 17:19:23.651475   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:23.651484   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:23.651490   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:23.655606   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:19:23.851523   29751 request.go:629] Waited for 195.363742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:19:23.851571   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:19:23.851576   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:23.851585   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:23.851588   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:23.855252   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:23.855761   29751 pod_ready.go:92] pod "kube-controller-manager-ha-900414" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:23.855785   29751 pod_ready.go:81] duration metric: took 398.06379ms for pod "kube-controller-manager-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:23.855795   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:24.051238   29751 request.go:629] Waited for 195.386711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-900414-m02
	I0729 17:19:24.051305   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-900414-m02
	I0729 17:19:24.051311   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:24.051319   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:24.051324   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:24.054653   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:24.251795   29751 request.go:629] Waited for 196.363963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:19:24.251840   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:19:24.251847   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:24.251854   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:24.251860   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:24.255941   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:19:24.256531   29751 pod_ready.go:92] pod "kube-controller-manager-ha-900414-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:24.256552   29751 pod_ready.go:81] duration metric: took 400.750428ms for pod "kube-controller-manager-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:24.256562   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-900414-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:24.451894   29751 request.go:629] Waited for 195.26591ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-900414-m03
	I0729 17:19:24.451968   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-900414-m03
	I0729 17:19:24.451979   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:24.451993   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:24.452004   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:24.455670   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:24.651694   29751 request.go:629] Waited for 195.361663ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:24.651747   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:24.651754   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:24.651764   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:24.651773   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:24.654780   29751 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:19:24.655240   29751 pod_ready.go:92] pod "kube-controller-manager-ha-900414-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:24.655304   29751 pod_ready.go:81] duration metric: took 398.730533ms for pod "kube-controller-manager-ha-900414-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:24.655323   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bgq99" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:24.851560   29751 request.go:629] Waited for 196.160756ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bgq99
	I0729 17:19:24.851637   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bgq99
	I0729 17:19:24.851645   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:24.851654   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:24.851662   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:24.855588   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:25.051529   29751 request.go:629] Waited for 195.171844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:19:25.051604   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:19:25.051616   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:25.051627   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:25.051641   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:25.054958   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:25.055413   29751 pod_ready.go:92] pod "kube-proxy-bgq99" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:25.055431   29751 pod_ready.go:81] duration metric: took 400.102063ms for pod "kube-proxy-bgq99" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:25.055442   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tng4t" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:25.251534   29751 request.go:629] Waited for 196.01631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tng4t
	I0729 17:19:25.251602   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tng4t
	I0729 17:19:25.251607   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:25.251615   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:25.251619   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:25.254889   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:25.452044   29751 request.go:629] Waited for 196.352608ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:19:25.452102   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:19:25.452158   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:25.452172   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:25.452182   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:25.455565   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:25.456093   29751 pod_ready.go:92] pod "kube-proxy-tng4t" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:25.456116   29751 pod_ready.go:81] duration metric: took 400.661421ms for pod "kube-proxy-tng4t" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:25.456125   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wnfsb" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:25.652157   29751 request.go:629] Waited for 195.96246ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wnfsb
	I0729 17:19:25.652245   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wnfsb
	I0729 17:19:25.652256   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:25.652267   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:25.652276   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:25.655725   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:25.851517   29751 request.go:629] Waited for 195.149449ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:25.851595   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:25.851606   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:25.851618   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:25.851628   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:25.854422   29751 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:19:25.855240   29751 pod_ready.go:92] pod "kube-proxy-wnfsb" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:25.855262   29751 pod_ready.go:81] duration metric: took 399.130576ms for pod "kube-proxy-wnfsb" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:25.855275   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:26.051220   29751 request.go:629] Waited for 195.864245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-900414
	I0729 17:19:26.051288   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-900414
	I0729 17:19:26.051293   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:26.051302   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:26.051313   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:26.054646   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:26.251646   29751 request.go:629] Waited for 196.397199ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:19:26.251718   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:19:26.251723   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:26.251732   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:26.251739   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:26.255079   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:26.255799   29751 pod_ready.go:92] pod "kube-scheduler-ha-900414" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:26.255826   29751 pod_ready.go:81] duration metric: took 400.542457ms for pod "kube-scheduler-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:26.255841   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:26.452159   29751 request.go:629] Waited for 196.243797ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-900414-m02
	I0729 17:19:26.452214   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-900414-m02
	I0729 17:19:26.452219   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:26.452227   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:26.452232   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:26.456010   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:26.651980   29751 request.go:629] Waited for 195.349194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:19:26.652042   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:19:26.652050   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:26.652058   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:26.652061   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:26.655151   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:26.655867   29751 pod_ready.go:92] pod "kube-scheduler-ha-900414-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:26.655886   29751 pod_ready.go:81] duration metric: took 400.036515ms for pod "kube-scheduler-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:26.655895   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-900414-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:26.851900   29751 request.go:629] Waited for 195.949978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-900414-m03
	I0729 17:19:26.851987   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-900414-m03
	I0729 17:19:26.851998   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:26.852010   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:26.852019   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:26.855542   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:27.051535   29751 request.go:629] Waited for 195.337415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:27.051620   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:27.051626   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:27.051636   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:27.051642   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:27.055487   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:27.056183   29751 pod_ready.go:92] pod "kube-scheduler-ha-900414-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:27.056199   29751 pod_ready.go:81] duration metric: took 400.299217ms for pod "kube-scheduler-ha-900414-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:27.056210   29751 pod_ready.go:38] duration metric: took 5.201207309s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 17:19:27.056225   29751 api_server.go:52] waiting for apiserver process to appear ...
	I0729 17:19:27.056269   29751 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:19:27.073728   29751 api_server.go:72] duration metric: took 19.601372284s to wait for apiserver process to appear ...
	I0729 17:19:27.073746   29751 api_server.go:88] waiting for apiserver healthz status ...
	I0729 17:19:27.073763   29751 api_server.go:253] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I0729 17:19:27.077897   29751 api_server.go:279] https://192.168.39.114:8443/healthz returned 200:
	ok
	I0729 17:19:27.077950   29751 round_trippers.go:463] GET https://192.168.39.114:8443/version
	I0729 17:19:27.077957   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:27.077966   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:27.077972   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:27.078823   29751 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 17:19:27.078882   29751 api_server.go:141] control plane version: v1.30.3
	I0729 17:19:27.078899   29751 api_server.go:131] duration metric: took 5.145715ms to wait for apiserver health ...
	I0729 17:19:27.078908   29751 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 17:19:27.251235   29751 request.go:629] Waited for 172.262934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I0729 17:19:27.251282   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I0729 17:19:27.251287   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:27.251293   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:27.251297   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:27.258319   29751 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 17:19:27.264831   29751 system_pods.go:59] 24 kube-system pods found
	I0729 17:19:27.264865   29751 system_pods.go:61] "coredns-7db6d8ff4d-48j6w" [306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d] Running
	I0729 17:19:27.264874   29751 system_pods.go:61] "coredns-7db6d8ff4d-9r87x" [fcc4709f-f07b-4694-a352-aedd9c67bbb2] Running
	I0729 17:19:27.264880   29751 system_pods.go:61] "etcd-ha-900414" [96243a16-1b51-4136-bc25-f3a0da2f7500] Running
	I0729 17:19:27.264886   29751 system_pods.go:61] "etcd-ha-900414-m02" [29c61208-cebd-4d6b-addf-426efcc78899] Running
	I0729 17:19:27.264891   29751 system_pods.go:61] "etcd-ha-900414-m03" [67d8b9ed-d401-4de2-9ef8-c8295c488e29] Running
	I0729 17:19:27.264898   29751 system_pods.go:61] "kindnet-6vzd2" [396c742c-f9b6-4184-84db-7407ba419a86] Running
	I0729 17:19:27.264910   29751 system_pods.go:61] "kindnet-kdzhk" [d86b52ee-7d4c-4530-afa1-88cf8ad77379] Running
	I0729 17:19:27.264915   29751 system_pods.go:61] "kindnet-z9cvz" [c2177daa-4efb-478c-845f-f30e77e91684] Running
	I0729 17:19:27.264919   29751 system_pods.go:61] "kube-apiserver-ha-900414" [2a4045e8-a900-4ebd-b36e-95083ab251c9] Running
	I0729 17:19:27.264924   29751 system_pods.go:61] "kube-apiserver-ha-900414-m02" [28c2e5cf-876b-4b77-b9c7-406642dc4df6] Running
	I0729 17:19:27.264930   29751 system_pods.go:61] "kube-apiserver-ha-900414-m03" [2d5328fb-f6d2-4efc-ab72-0395e6500f21] Running
	I0729 17:19:27.264934   29751 system_pods.go:61] "kube-controller-manager-ha-900414" [62bb9ded-db08-49a0-aea4-8806d0e8d294] Running
	I0729 17:19:27.264939   29751 system_pods.go:61] "kube-controller-manager-ha-900414-m02" [88418c96-4611-4276-91c6-ae9b67d4ae74] Running
	I0729 17:19:27.264943   29751 system_pods.go:61] "kube-controller-manager-ha-900414-m03" [f8b5466c-1783-4f30-b3d1-f5034f7f52af] Running
	I0729 17:19:27.264948   29751 system_pods.go:61] "kube-proxy-bgq99" [0258cc44-f6ff-4294-a621-61b172247e15] Running
	I0729 17:19:27.264952   29751 system_pods.go:61] "kube-proxy-tng4t" [2303269f-50d3-4a63-aa76-891f001e6f5d] Running
	I0729 17:19:27.264957   29751 system_pods.go:61] "kube-proxy-wnfsb" [0322d88f-c31b-4cc7-b073-2f97ab9e047a] Running
	I0729 17:19:27.264963   29751 system_pods.go:61] "kube-scheduler-ha-900414" [3d41b818-c8ad-4dbb-bc7b-73f578d33539] Running
	I0729 17:19:27.264971   29751 system_pods.go:61] "kube-scheduler-ha-900414-m02" [f9cc318d-be18-4858-9712-b92f11027b65] Running
	I0729 17:19:27.264977   29751 system_pods.go:61] "kube-scheduler-ha-900414-m03" [7787c02c-b8dc-435f-9e58-52108a528291] Running
	I0729 17:19:27.264984   29751 system_pods.go:61] "kube-vip-ha-900414" [bf3918b4-6cc5-499b-808e-b6c33138cae2] Running
	I0729 17:19:27.264989   29751 system_pods.go:61] "kube-vip-ha-900414-m02" [9fad8ffb-6d3c-44ba-9700-e0e4d70a5f71] Running
	I0729 17:19:27.264993   29751 system_pods.go:61] "kube-vip-ha-900414-m03" [78c34b31-b4d7-4311-9c22-32a2f8fdd948] Running
	I0729 17:19:27.265002   29751 system_pods.go:61] "storage-provisioner" [50fa96e8-1ee5-4e09-a734-802dbcd02bcc] Running
	I0729 17:19:27.265012   29751 system_pods.go:74] duration metric: took 186.095195ms to wait for pod list to return data ...
	I0729 17:19:27.265024   29751 default_sa.go:34] waiting for default service account to be created ...
	I0729 17:19:27.451349   29751 request.go:629] Waited for 186.261672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/default/serviceaccounts
	I0729 17:19:27.451413   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/default/serviceaccounts
	I0729 17:19:27.451419   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:27.451426   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:27.451438   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:27.454883   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:27.455036   29751 default_sa.go:45] found service account: "default"
	I0729 17:19:27.455055   29751 default_sa.go:55] duration metric: took 190.02141ms for default service account to be created ...
	I0729 17:19:27.455066   29751 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 17:19:27.651345   29751 request.go:629] Waited for 196.211598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I0729 17:19:27.651413   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I0729 17:19:27.651421   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:27.651444   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:27.651454   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:27.658183   29751 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 17:19:27.664639   29751 system_pods.go:86] 24 kube-system pods found
	I0729 17:19:27.664662   29751 system_pods.go:89] "coredns-7db6d8ff4d-48j6w" [306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d] Running
	I0729 17:19:27.664668   29751 system_pods.go:89] "coredns-7db6d8ff4d-9r87x" [fcc4709f-f07b-4694-a352-aedd9c67bbb2] Running
	I0729 17:19:27.664673   29751 system_pods.go:89] "etcd-ha-900414" [96243a16-1b51-4136-bc25-f3a0da2f7500] Running
	I0729 17:19:27.664677   29751 system_pods.go:89] "etcd-ha-900414-m02" [29c61208-cebd-4d6b-addf-426efcc78899] Running
	I0729 17:19:27.664681   29751 system_pods.go:89] "etcd-ha-900414-m03" [67d8b9ed-d401-4de2-9ef8-c8295c488e29] Running
	I0729 17:19:27.664684   29751 system_pods.go:89] "kindnet-6vzd2" [396c742c-f9b6-4184-84db-7407ba419a86] Running
	I0729 17:19:27.664688   29751 system_pods.go:89] "kindnet-kdzhk" [d86b52ee-7d4c-4530-afa1-88cf8ad77379] Running
	I0729 17:19:27.664695   29751 system_pods.go:89] "kindnet-z9cvz" [c2177daa-4efb-478c-845f-f30e77e91684] Running
	I0729 17:19:27.664700   29751 system_pods.go:89] "kube-apiserver-ha-900414" [2a4045e8-a900-4ebd-b36e-95083ab251c9] Running
	I0729 17:19:27.664706   29751 system_pods.go:89] "kube-apiserver-ha-900414-m02" [28c2e5cf-876b-4b77-b9c7-406642dc4df6] Running
	I0729 17:19:27.664710   29751 system_pods.go:89] "kube-apiserver-ha-900414-m03" [2d5328fb-f6d2-4efc-ab72-0395e6500f21] Running
	I0729 17:19:27.664716   29751 system_pods.go:89] "kube-controller-manager-ha-900414" [62bb9ded-db08-49a0-aea4-8806d0e8d294] Running
	I0729 17:19:27.664720   29751 system_pods.go:89] "kube-controller-manager-ha-900414-m02" [88418c96-4611-4276-91c6-ae9b67d4ae74] Running
	I0729 17:19:27.664727   29751 system_pods.go:89] "kube-controller-manager-ha-900414-m03" [f8b5466c-1783-4f30-b3d1-f5034f7f52af] Running
	I0729 17:19:27.664731   29751 system_pods.go:89] "kube-proxy-bgq99" [0258cc44-f6ff-4294-a621-61b172247e15] Running
	I0729 17:19:27.664736   29751 system_pods.go:89] "kube-proxy-tng4t" [2303269f-50d3-4a63-aa76-891f001e6f5d] Running
	I0729 17:19:27.664740   29751 system_pods.go:89] "kube-proxy-wnfsb" [0322d88f-c31b-4cc7-b073-2f97ab9e047a] Running
	I0729 17:19:27.664746   29751 system_pods.go:89] "kube-scheduler-ha-900414" [3d41b818-c8ad-4dbb-bc7b-73f578d33539] Running
	I0729 17:19:27.664750   29751 system_pods.go:89] "kube-scheduler-ha-900414-m02" [f9cc318d-be18-4858-9712-b92f11027b65] Running
	I0729 17:19:27.664755   29751 system_pods.go:89] "kube-scheduler-ha-900414-m03" [7787c02c-b8dc-435f-9e58-52108a528291] Running
	I0729 17:19:27.664759   29751 system_pods.go:89] "kube-vip-ha-900414" [bf3918b4-6cc5-499b-808e-b6c33138cae2] Running
	I0729 17:19:27.664765   29751 system_pods.go:89] "kube-vip-ha-900414-m02" [9fad8ffb-6d3c-44ba-9700-e0e4d70a5f71] Running
	I0729 17:19:27.664768   29751 system_pods.go:89] "kube-vip-ha-900414-m03" [78c34b31-b4d7-4311-9c22-32a2f8fdd948] Running
	I0729 17:19:27.664774   29751 system_pods.go:89] "storage-provisioner" [50fa96e8-1ee5-4e09-a734-802dbcd02bcc] Running
	I0729 17:19:27.664779   29751 system_pods.go:126] duration metric: took 209.703827ms to wait for k8s-apps to be running ...
	I0729 17:19:27.664788   29751 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 17:19:27.664831   29751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:19:27.681971   29751 system_svc.go:56] duration metric: took 17.176262ms WaitForService to wait for kubelet
	I0729 17:19:27.681994   29751 kubeadm.go:582] duration metric: took 20.209639319s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:19:27.682013   29751 node_conditions.go:102] verifying NodePressure condition ...
	I0729 17:19:27.851321   29751 request.go:629] Waited for 169.243338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes
	I0729 17:19:27.851403   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes
	I0729 17:19:27.851412   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:27.851423   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:27.851429   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:27.855049   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:27.856452   29751 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 17:19:27.856489   29751 node_conditions.go:123] node cpu capacity is 2
	I0729 17:19:27.856510   29751 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 17:19:27.856516   29751 node_conditions.go:123] node cpu capacity is 2
	I0729 17:19:27.856523   29751 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 17:19:27.856527   29751 node_conditions.go:123] node cpu capacity is 2
	I0729 17:19:27.856532   29751 node_conditions.go:105] duration metric: took 174.51382ms to run NodePressure ...
	I0729 17:19:27.856545   29751 start.go:241] waiting for startup goroutines ...
	I0729 17:19:27.856563   29751 start.go:255] writing updated cluster config ...
	I0729 17:19:27.856897   29751 ssh_runner.go:195] Run: rm -f paused
	I0729 17:19:27.905803   29751 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 17:19:27.908595   29751 out.go:177] * Done! kubectl is now configured to use "ha-900414" cluster and "default" namespace by default
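	
	For readers following the log above: the sequence of pod_ready, /version and kube-system checks is the usual client-go readiness-polling pattern. The sketch below is illustrative only (it is not minikube's implementation); the kubeconfig path and the "k8s-app=kube-dns" selector are placeholders chosen to match what the log happens to query, and the standard client-go APIs are assumed.
	
	// readiness_sketch.go - minimal sketch of the polling pattern recorded above.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// podReady reports whether a pod has the Ready condition set to True,
	// the same condition the pod_ready log lines above are waiting on.
	func podReady(p corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}
	
	func main() {
		// Hypothetical kubeconfig path; minikube writes one per profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		// Rough equivalent of the GET /version request seen in the log.
		if v, err := cs.Discovery().ServerVersion(); err == nil {
			fmt.Println("control plane version:", v.GitVersion)
		}
	
		// Poll kube-system pods matching a label selector until all are Ready,
		// mirroring the per-pod "Ready" waits logged above.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kube-dns",
			})
			if err != nil {
				panic(err)
			}
			allReady := len(pods.Items) > 0
			for _, p := range pods.Items {
				if !podReady(p) {
					allReady = false
					break
				}
			}
			if allReady {
				fmt.Println("all matching pods Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}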
	
	
	==> CRI-O <==
	Jul 29 17:23:05 ha-900414 crio[684]: time="2024-07-29 17:23:05.402322918Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722273785402299373,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d3d127fa-8e75-4e54-a60d-6eff1b82666a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:23:05 ha-900414 crio[684]: time="2024-07-29 17:23:05.403090449Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8923a5a-74fb-44a6-85f7-e4c765924c76 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:23:05 ha-900414 crio[684]: time="2024-07-29 17:23:05.403149537Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8923a5a-74fb-44a6-85f7-e4c765924c76 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:23:05 ha-900414 crio[684]: time="2024-07-29 17:23:05.403392903Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:174e5d31268c70a798e1fa1fe5d2845d98eaed228a11b55810b7ca4680256a8e,PodSandboxId:7d2a64a5bcccdbfe3d1db48fd0a6231c01ec2f72f5944f5aa82835bdbbf8641b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722273570293151902,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4fv4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6,},Annotations:map[string]string{io.kubernetes.container.hash: bbf9a5b4,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7ffaf9ef2fda3e8c5965888c0244dd20c8cdc30b4ed1c300c5f9de3a70a127,PodSandboxId:7a0bb58ad2b90a00cbfe5381a420068caf367d6d0a46d8bfa235680d9a9e383c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722273433298849093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9r87x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc4709f-f07b-4694-a352-aedd9c67bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: a73c1fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911569fe2373d5193385d0fdcc98071bacd23c7de020ed4e2ab3a15a3793c2d2,PodSandboxId:f47facc78da61a96cbc7f88d068ff1130bdf82703fa98c5e773eba93b8000852,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722273433248509110,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-48j6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 14f903c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b419192dc8add024f08c798a5f50d7c6bd2ee0ae8a2280771508aebc78e20217,PodSandboxId:e8c37c9dd56b7d7518c4f43dfb13701b15d68f51590bd8e492cf12524a18465e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722273433176341595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50fa96e8-1ee5-4e09-a734-802dbcd02bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 1a126ce7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b182b72bc50740d9cc2e0ed8b5c1d4b8f58c58594cc462fc796a75ccce7d38,PodSandboxId:30715fa1b9f024468de573f3e60b03860bdea65df505677b107723e5e7663d18,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722273421363177930,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z9cvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2177daa-4efb-478c-845f-f30e77e91684,},Annotations:map[string]string{io.kubernetes.container.hash: 7870c1dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ef29620e9c9670549fa7741de5956157c7a03728d417b46b44a7b1abbf2ce9,PodSandboxId:250f31f0996e1b89f155a50b796cf5c3e03e4e621f62973dc2ca1b4547440256,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172227341
7715021107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tng4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2303269f-50d3-4a63-aa76-891f001e6f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e285077a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426b48b0fdbff14ce36fc0396074186cbd51533c984e6fac5f3f963bce611059,PodSandboxId:0c097128258009ad8eecfd45367bd8c008515e1f5f2371df23a13194dbe2a20c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222733999
19223743,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08540c30100787432ed84b2f9dea411c,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7721018288f905547c9c059b6453a96e4c74f3573058e88425444162b255edf,PodSandboxId:e2a054b42822ad7d37df60a69fdb759eb309b8ee40e4c712e2f7ae6a2aaa0e6c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722273397639690260,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba63e9544003a32c61ae2cfa7bb117,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a27f5a54bd43275313e419dabaa643ad1764f5cd10953333df1eea8a9a4bf1b,PodSandboxId:46030b1ba43cfae01b3b4a26ba23e19c1dade394973241fddfe9126def4aa597,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722273397619697045,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c283b6b662036e086a0948631d339c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ec5252a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:270db6978c4e4bce98a1f424ce50f66507840c818ab639d9ef02e8f96bab41d6,PodSandboxId:8d445686f72b1716e0c253ae52b4d355d100be01307847cdf5d5287ddbb9e25b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722273397549024916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188869688c2292cb440067d4b4cfa9f3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd71b5556931becb81321807072e3a8100ce3344e4dea3237c6918a6c8e98cc5,PodSandboxId:49589b3e6647a2c3217bb88a13f7c1a69fce9ea3ae44163d31496bd19c36d434,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722273397500791848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc461575e1c166c1aa8b00d38af205a,},Annotations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c8923a5a-74fb-44a6-85f7-e4c765924c76 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:23:05 ha-900414 crio[684]: time="2024-07-29 17:23:05.448504579Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=050678c8-c4cf-4b86-9abe-269a9ef80050 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:23:05 ha-900414 crio[684]: time="2024-07-29 17:23:05.448794920Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=050678c8-c4cf-4b86-9abe-269a9ef80050 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:23:05 ha-900414 crio[684]: time="2024-07-29 17:23:05.450132475Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c7a81e13-477c-485c-8b67-9dc5e0fd9079 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:23:05 ha-900414 crio[684]: time="2024-07-29 17:23:05.450582930Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722273785450561761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7a81e13-477c-485c-8b67-9dc5e0fd9079 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:23:05 ha-900414 crio[684]: time="2024-07-29 17:23:05.451341460Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3b1f2e5-9fd6-4b16-9f90-facd1285bb24 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:23:05 ha-900414 crio[684]: time="2024-07-29 17:23:05.451400825Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3b1f2e5-9fd6-4b16-9f90-facd1285bb24 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:23:05 ha-900414 crio[684]: time="2024-07-29 17:23:05.451643187Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:174e5d31268c70a798e1fa1fe5d2845d98eaed228a11b55810b7ca4680256a8e,PodSandboxId:7d2a64a5bcccdbfe3d1db48fd0a6231c01ec2f72f5944f5aa82835bdbbf8641b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722273570293151902,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4fv4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6,},Annotations:map[string]string{io.kubernetes.container.hash: bbf9a5b4,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7ffaf9ef2fda3e8c5965888c0244dd20c8cdc30b4ed1c300c5f9de3a70a127,PodSandboxId:7a0bb58ad2b90a00cbfe5381a420068caf367d6d0a46d8bfa235680d9a9e383c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722273433298849093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9r87x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc4709f-f07b-4694-a352-aedd9c67bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: a73c1fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911569fe2373d5193385d0fdcc98071bacd23c7de020ed4e2ab3a15a3793c2d2,PodSandboxId:f47facc78da61a96cbc7f88d068ff1130bdf82703fa98c5e773eba93b8000852,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722273433248509110,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-48j6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 14f903c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b419192dc8add024f08c798a5f50d7c6bd2ee0ae8a2280771508aebc78e20217,PodSandboxId:e8c37c9dd56b7d7518c4f43dfb13701b15d68f51590bd8e492cf12524a18465e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722273433176341595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50fa96e8-1ee5-4e09-a734-802dbcd02bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 1a126ce7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b182b72bc50740d9cc2e0ed8b5c1d4b8f58c58594cc462fc796a75ccce7d38,PodSandboxId:30715fa1b9f024468de573f3e60b03860bdea65df505677b107723e5e7663d18,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722273421363177930,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z9cvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2177daa-4efb-478c-845f-f30e77e91684,},Annotations:map[string]string{io.kubernetes.container.hash: 7870c1dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ef29620e9c9670549fa7741de5956157c7a03728d417b46b44a7b1abbf2ce9,PodSandboxId:250f31f0996e1b89f155a50b796cf5c3e03e4e621f62973dc2ca1b4547440256,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172227341
7715021107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tng4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2303269f-50d3-4a63-aa76-891f001e6f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e285077a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426b48b0fdbff14ce36fc0396074186cbd51533c984e6fac5f3f963bce611059,PodSandboxId:0c097128258009ad8eecfd45367bd8c008515e1f5f2371df23a13194dbe2a20c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222733999
19223743,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08540c30100787432ed84b2f9dea411c,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7721018288f905547c9c059b6453a96e4c74f3573058e88425444162b255edf,PodSandboxId:e2a054b42822ad7d37df60a69fdb759eb309b8ee40e4c712e2f7ae6a2aaa0e6c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722273397639690260,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba63e9544003a32c61ae2cfa7bb117,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a27f5a54bd43275313e419dabaa643ad1764f5cd10953333df1eea8a9a4bf1b,PodSandboxId:46030b1ba43cfae01b3b4a26ba23e19c1dade394973241fddfe9126def4aa597,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722273397619697045,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c283b6b662036e086a0948631d339c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ec5252a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:270db6978c4e4bce98a1f424ce50f66507840c818ab639d9ef02e8f96bab41d6,PodSandboxId:8d445686f72b1716e0c253ae52b4d355d100be01307847cdf5d5287ddbb9e25b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722273397549024916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188869688c2292cb440067d4b4cfa9f3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd71b5556931becb81321807072e3a8100ce3344e4dea3237c6918a6c8e98cc5,PodSandboxId:49589b3e6647a2c3217bb88a13f7c1a69fce9ea3ae44163d31496bd19c36d434,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722273397500791848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc461575e1c166c1aa8b00d38af205a,},Annotations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3b1f2e5-9fd6-4b16-9f90-facd1285bb24 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:23:05 ha-900414 crio[684]: time="2024-07-29 17:23:05.488182659Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f2fabf9c-3f4d-47c8-aed4-a8c1773ee6f1 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:23:05 ha-900414 crio[684]: time="2024-07-29 17:23:05.488256062Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f2fabf9c-3f4d-47c8-aed4-a8c1773ee6f1 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:23:05 ha-900414 crio[684]: time="2024-07-29 17:23:05.489308132Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0a5845c3-a1c2-4f88-b5bc-06a4bee92276 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:23:05 ha-900414 crio[684]: time="2024-07-29 17:23:05.490069614Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722273785490043147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0a5845c3-a1c2-4f88-b5bc-06a4bee92276 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:23:05 ha-900414 crio[684]: time="2024-07-29 17:23:05.491105205Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d4bfc19d-52d0-4209-ab55-4172ff34bca3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:23:05 ha-900414 crio[684]: time="2024-07-29 17:23:05.491158274Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4bfc19d-52d0-4209-ab55-4172ff34bca3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:23:05 ha-900414 crio[684]: time="2024-07-29 17:23:05.491389474Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:174e5d31268c70a798e1fa1fe5d2845d98eaed228a11b55810b7ca4680256a8e,PodSandboxId:7d2a64a5bcccdbfe3d1db48fd0a6231c01ec2f72f5944f5aa82835bdbbf8641b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722273570293151902,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4fv4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6,},Annotations:map[string]string{io.kubernetes.container.hash: bbf9a5b4,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7ffaf9ef2fda3e8c5965888c0244dd20c8cdc30b4ed1c300c5f9de3a70a127,PodSandboxId:7a0bb58ad2b90a00cbfe5381a420068caf367d6d0a46d8bfa235680d9a9e383c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722273433298849093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9r87x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc4709f-f07b-4694-a352-aedd9c67bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: a73c1fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911569fe2373d5193385d0fdcc98071bacd23c7de020ed4e2ab3a15a3793c2d2,PodSandboxId:f47facc78da61a96cbc7f88d068ff1130bdf82703fa98c5e773eba93b8000852,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722273433248509110,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-48j6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 14f903c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b419192dc8add024f08c798a5f50d7c6bd2ee0ae8a2280771508aebc78e20217,PodSandboxId:e8c37c9dd56b7d7518c4f43dfb13701b15d68f51590bd8e492cf12524a18465e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722273433176341595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50fa96e8-1ee5-4e09-a734-802dbcd02bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 1a126ce7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b182b72bc50740d9cc2e0ed8b5c1d4b8f58c58594cc462fc796a75ccce7d38,PodSandboxId:30715fa1b9f024468de573f3e60b03860bdea65df505677b107723e5e7663d18,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722273421363177930,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z9cvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2177daa-4efb-478c-845f-f30e77e91684,},Annotations:map[string]string{io.kubernetes.container.hash: 7870c1dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ef29620e9c9670549fa7741de5956157c7a03728d417b46b44a7b1abbf2ce9,PodSandboxId:250f31f0996e1b89f155a50b796cf5c3e03e4e621f62973dc2ca1b4547440256,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172227341
7715021107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tng4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2303269f-50d3-4a63-aa76-891f001e6f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e285077a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426b48b0fdbff14ce36fc0396074186cbd51533c984e6fac5f3f963bce611059,PodSandboxId:0c097128258009ad8eecfd45367bd8c008515e1f5f2371df23a13194dbe2a20c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222733999
19223743,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08540c30100787432ed84b2f9dea411c,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7721018288f905547c9c059b6453a96e4c74f3573058e88425444162b255edf,PodSandboxId:e2a054b42822ad7d37df60a69fdb759eb309b8ee40e4c712e2f7ae6a2aaa0e6c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722273397639690260,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba63e9544003a32c61ae2cfa7bb117,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a27f5a54bd43275313e419dabaa643ad1764f5cd10953333df1eea8a9a4bf1b,PodSandboxId:46030b1ba43cfae01b3b4a26ba23e19c1dade394973241fddfe9126def4aa597,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722273397619697045,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c283b6b662036e086a0948631d339c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ec5252a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:270db6978c4e4bce98a1f424ce50f66507840c818ab639d9ef02e8f96bab41d6,PodSandboxId:8d445686f72b1716e0c253ae52b4d355d100be01307847cdf5d5287ddbb9e25b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722273397549024916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188869688c2292cb440067d4b4cfa9f3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd71b5556931becb81321807072e3a8100ce3344e4dea3237c6918a6c8e98cc5,PodSandboxId:49589b3e6647a2c3217bb88a13f7c1a69fce9ea3ae44163d31496bd19c36d434,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722273397500791848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc461575e1c166c1aa8b00d38af205a,},Annotations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d4bfc19d-52d0-4209-ab55-4172ff34bca3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:23:05 ha-900414 crio[684]: time="2024-07-29 17:23:05.533733022Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4cbcf5db-0ef9-46d7-abb6-13ebdff14705 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:23:05 ha-900414 crio[684]: time="2024-07-29 17:23:05.533810670Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4cbcf5db-0ef9-46d7-abb6-13ebdff14705 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:23:05 ha-900414 crio[684]: time="2024-07-29 17:23:05.535480905Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6bea6457-94ab-4a48-ab86-6351713e593d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:23:05 ha-900414 crio[684]: time="2024-07-29 17:23:05.535920097Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722273785535898795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6bea6457-94ab-4a48-ab86-6351713e593d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:23:05 ha-900414 crio[684]: time="2024-07-29 17:23:05.536734873Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=236e8a74-e64c-459e-96fe-aee00b2f1d0f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:23:05 ha-900414 crio[684]: time="2024-07-29 17:23:05.536788673Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=236e8a74-e64c-459e-96fe-aee00b2f1d0f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:23:05 ha-900414 crio[684]: time="2024-07-29 17:23:05.537253564Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:174e5d31268c70a798e1fa1fe5d2845d98eaed228a11b55810b7ca4680256a8e,PodSandboxId:7d2a64a5bcccdbfe3d1db48fd0a6231c01ec2f72f5944f5aa82835bdbbf8641b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722273570293151902,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4fv4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6,},Annotations:map[string]string{io.kubernetes.container.hash: bbf9a5b4,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7ffaf9ef2fda3e8c5965888c0244dd20c8cdc30b4ed1c300c5f9de3a70a127,PodSandboxId:7a0bb58ad2b90a00cbfe5381a420068caf367d6d0a46d8bfa235680d9a9e383c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722273433298849093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9r87x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc4709f-f07b-4694-a352-aedd9c67bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: a73c1fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911569fe2373d5193385d0fdcc98071bacd23c7de020ed4e2ab3a15a3793c2d2,PodSandboxId:f47facc78da61a96cbc7f88d068ff1130bdf82703fa98c5e773eba93b8000852,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722273433248509110,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-48j6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 14f903c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b419192dc8add024f08c798a5f50d7c6bd2ee0ae8a2280771508aebc78e20217,PodSandboxId:e8c37c9dd56b7d7518c4f43dfb13701b15d68f51590bd8e492cf12524a18465e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722273433176341595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50fa96e8-1ee5-4e09-a734-802dbcd02bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 1a126ce7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b182b72bc50740d9cc2e0ed8b5c1d4b8f58c58594cc462fc796a75ccce7d38,PodSandboxId:30715fa1b9f024468de573f3e60b03860bdea65df505677b107723e5e7663d18,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722273421363177930,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z9cvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2177daa-4efb-478c-845f-f30e77e91684,},Annotations:map[string]string{io.kubernetes.container.hash: 7870c1dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ef29620e9c9670549fa7741de5956157c7a03728d417b46b44a7b1abbf2ce9,PodSandboxId:250f31f0996e1b89f155a50b796cf5c3e03e4e621f62973dc2ca1b4547440256,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172227341
7715021107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tng4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2303269f-50d3-4a63-aa76-891f001e6f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e285077a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426b48b0fdbff14ce36fc0396074186cbd51533c984e6fac5f3f963bce611059,PodSandboxId:0c097128258009ad8eecfd45367bd8c008515e1f5f2371df23a13194dbe2a20c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222733999
19223743,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08540c30100787432ed84b2f9dea411c,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7721018288f905547c9c059b6453a96e4c74f3573058e88425444162b255edf,PodSandboxId:e2a054b42822ad7d37df60a69fdb759eb309b8ee40e4c712e2f7ae6a2aaa0e6c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722273397639690260,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba63e9544003a32c61ae2cfa7bb117,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a27f5a54bd43275313e419dabaa643ad1764f5cd10953333df1eea8a9a4bf1b,PodSandboxId:46030b1ba43cfae01b3b4a26ba23e19c1dade394973241fddfe9126def4aa597,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722273397619697045,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c283b6b662036e086a0948631d339c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ec5252a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:270db6978c4e4bce98a1f424ce50f66507840c818ab639d9ef02e8f96bab41d6,PodSandboxId:8d445686f72b1716e0c253ae52b4d355d100be01307847cdf5d5287ddbb9e25b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722273397549024916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188869688c2292cb440067d4b4cfa9f3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd71b5556931becb81321807072e3a8100ce3344e4dea3237c6918a6c8e98cc5,PodSandboxId:49589b3e6647a2c3217bb88a13f7c1a69fce9ea3ae44163d31496bd19c36d434,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722273397500791848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc461575e1c166c1aa8b00d38af205a,},Annotations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=236e8a74-e64c-459e-96fe-aee00b2f1d0f name=/runtime.v1.RuntimeService/ListContainers
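
The crio debug entries above are routine CRI polling: RuntimeService/ListContainers, RuntimeService/Version and ImageService/ImageFsInfo return the same container set and runtime metadata on every cycle. A minimal sketch of the equivalent queries by hand, assuming crictl is available inside the ha-900414 guest (it normally ships in the minikube VM):

    minikube -p ha-900414 ssh -- sudo crictl ps -a        # container list, same data as RuntimeService/ListContainers
    minikube -p ha-900414 ssh -- sudo crictl version      # runtime name/version, as in RuntimeService/Version
    minikube -p ha-900414 ssh -- sudo crictl imagefsinfo  # image filesystem usage, as in ImageService/ImageFsInfo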
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	174e5d31268c7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   7d2a64a5bcccd       busybox-fc5497c4f-4fv4t
	7d7ffaf9ef2fd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   7a0bb58ad2b90       coredns-7db6d8ff4d-9r87x
	911569fe2373d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   f47facc78da61       coredns-7db6d8ff4d-48j6w
	b419192dc8add       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   e8c37c9dd56b7       storage-provisioner
	10b182b72bc50       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    6 minutes ago       Running             kindnet-cni               0                   30715fa1b9f02       kindnet-z9cvz
	37ef29620e9c9       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      6 minutes ago       Running             kube-proxy                0                   250f31f0996e1       kube-proxy-tng4t
	426b48b0fdbff       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   0c09712825800       kube-vip-ha-900414
	a7721018288f9       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      6 minutes ago       Running             kube-scheduler            0                   e2a054b42822a       kube-scheduler-ha-900414
	2a27f5a54bd43       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   46030b1ba43cf       etcd-ha-900414
	270db6978c4e4       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      6 minutes ago       Running             kube-controller-manager   0                   8d445686f72b1       kube-controller-manager-ha-900414
	dd71b5556931b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      6 minutes ago       Running             kube-apiserver            0                   49589b3e6647a       kube-apiserver-ha-900414
	
	
	==> coredns [7d7ffaf9ef2fda3e8c5965888c0244dd20c8cdc30b4ed1c300c5f9de3a70a127] <==
	[INFO] 10.244.2.2:33776 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00179506s
	[INFO] 10.244.0.4:52013 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177704s
	[INFO] 10.244.0.4:35837 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000126238s
	[INFO] 10.244.0.4:49524 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118581s
	[INFO] 10.244.1.2:48270 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204422s
	[INFO] 10.244.1.2:35645 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001855497s
	[INFO] 10.244.1.2:43192 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000177303s
	[INFO] 10.244.1.2:33281 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000160789s
	[INFO] 10.244.1.2:57013 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097416s
	[INFO] 10.244.2.2:38166 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136029s
	[INFO] 10.244.2.2:33640 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001913014s
	[INFO] 10.244.2.2:47485 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104905s
	[INFO] 10.244.2.2:45778 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170534s
	[INFO] 10.244.2.2:59234 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076101s
	[INFO] 10.244.0.4:50535 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065536s
	[INFO] 10.244.1.2:58622 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133396s
	[INFO] 10.244.1.2:33438 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102338s
	[INFO] 10.244.2.2:45926 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000383812s
	[INFO] 10.244.2.2:56980 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000187545s
	[INFO] 10.244.2.2:43137 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00016801s
	[INFO] 10.244.0.4:57612 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000159389s
	[INFO] 10.244.1.2:58047 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014126s
	[INFO] 10.244.1.2:45045 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000123813s
	[INFO] 10.244.2.2:35311 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000173973s
	[INFO] 10.244.2.2:47044 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000140928s
	
	
	==> coredns [911569fe2373d5193385d0fdcc98071bacd23c7de020ed4e2ab3a15a3793c2d2] <==
	[INFO] 10.244.2.2:54591 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000473416s
	[INFO] 10.244.0.4:51118 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 1.682006305s
	[INFO] 10.244.0.4:48566 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000231629s
	[INFO] 10.244.0.4:43462 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000157968s
	[INFO] 10.244.0.4:46703 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.079065283s
	[INFO] 10.244.0.4:43001 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000165887s
	[INFO] 10.244.1.2:43677 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128129s
	[INFO] 10.244.1.2:39513 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001354968s
	[INFO] 10.244.1.2:52828 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000183362s
	[INFO] 10.244.2.2:51403 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116578s
	[INFO] 10.244.2.2:47706 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001162998s
	[INFO] 10.244.2.2:39349 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000083497s
	[INFO] 10.244.0.4:43643 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164666s
	[INFO] 10.244.0.4:51941 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067553s
	[INFO] 10.244.0.4:33186 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117492s
	[INFO] 10.244.1.2:36002 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170421s
	[INFO] 10.244.1.2:41186 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135424s
	[INFO] 10.244.2.2:40469 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00015464s
	[INFO] 10.244.0.4:58750 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131521s
	[INFO] 10.244.0.4:59782 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141641s
	[INFO] 10.244.0.4:47289 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000189592s
	[INFO] 10.244.1.2:44743 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000121922s
	[INFO] 10.244.1.2:60901 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099491s
	[INFO] 10.244.2.2:53612 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000143831s
	[INFO] 10.244.2.2:35693 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000120049s
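
Each CoreDNS line above follows the log plugin's common format: client ip:port, query id, the quoted query (type, class, name, protocol, request size, DO bit, UDP buffer size), then response code, response flags, response size and duration. A quick way to generate and inspect such a lookup from inside the cluster, assuming the kubectl context is named after the ha-900414 profile and reusing the busybox pod already running in default:

    kubectl --context ha-900414 exec busybox-fc5497c4f-4fv4t -- nslookup kubernetes.default.svc.cluster.local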
	
	
	==> describe nodes <==
	Name:               ha-900414
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-900414
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=ha-900414
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T17_16_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:16:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-900414
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:23:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:19:47 +0000   Mon, 29 Jul 2024 17:16:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:19:47 +0000   Mon, 29 Jul 2024 17:16:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:19:47 +0000   Mon, 29 Jul 2024 17:16:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:19:47 +0000   Mon, 29 Jul 2024 17:17:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.114
	  Hostname:    ha-900414
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d0301ef966ab4d039cde4e4959e83ea6
	  System UUID:                d0301ef9-66ab-4d03-9cde-4e4959e83ea6
	  Boot ID:                    ea7d1983-2f49-4874-b67f-d8eea13c27d6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4fv4t              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 coredns-7db6d8ff4d-48j6w             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m8s
	  kube-system                 coredns-7db6d8ff4d-9r87x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m8s
	  kube-system                 etcd-ha-900414                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m22s
	  kube-system                 kindnet-z9cvz                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m8s
	  kube-system                 kube-apiserver-ha-900414             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-controller-manager-ha-900414    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-proxy-tng4t                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-scheduler-ha-900414             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-vip-ha-900414                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m7s   kube-proxy       
	  Normal  Starting                 6m22s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m22s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m22s  kubelet          Node ha-900414 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m22s  kubelet          Node ha-900414 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m22s  kubelet          Node ha-900414 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m9s   node-controller  Node ha-900414 event: Registered Node ha-900414 in Controller
	  Normal  NodeReady                5m53s  kubelet          Node ha-900414 status is now: NodeReady
	  Normal  RegisteredNode           4m55s  node-controller  Node ha-900414 event: Registered Node ha-900414 in Controller
	  Normal  RegisteredNode           3m43s  node-controller  Node ha-900414 event: Registered Node ha-900414 in Controller
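
To pull these node events directly rather than through a full describe, assuming the ha-900414 kubectl context, one could run:

    kubectl --context ha-900414 get events -A --field-selector involvedObject.kind=Node,involvedObject.name=ha-900414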
	
	
	Name:               ha-900414-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-900414-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=ha-900414
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T17_17_55_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:17:51 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-900414-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:20:45 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 17:19:53 +0000   Mon, 29 Jul 2024 17:21:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 17:19:53 +0000   Mon, 29 Jul 2024 17:21:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 17:19:53 +0000   Mon, 29 Jul 2024 17:21:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 17:19:53 +0000   Mon, 29 Jul 2024 17:21:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.111
	  Hostname:    ha-900414-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 854b5d80a28944e1a0d7e90a65ef964f
	  System UUID:                854b5d80-a289-44e1-a0d7-e90a65ef964f
	  Boot ID:                    b75c3f88-64bd-447d-a8b3-d30def6f548b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dqz55                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 etcd-ha-900414-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m12s
	  kube-system                 kindnet-kdzhk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m14s
	  kube-system                 kube-apiserver-ha-900414-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-controller-manager-ha-900414-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-proxy-bgq99                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-scheduler-ha-900414-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-vip-ha-900414-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m10s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m14s                  node-controller  Node ha-900414-m02 event: Registered Node ha-900414-m02 in Controller
	  Normal  NodeHasSufficientMemory  5m14s (x8 over 5m15s)  kubelet          Node ha-900414-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m14s (x8 over 5m15s)  kubelet          Node ha-900414-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m14s (x7 over 5m15s)  kubelet          Node ha-900414-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m55s                  node-controller  Node ha-900414-m02 event: Registered Node ha-900414-m02 in Controller
	  Normal  RegisteredNode           3m43s                  node-controller  Node ha-900414-m02 event: Registered Node ha-900414-m02 in Controller
	  Normal  NodeNotReady             99s                    node-controller  Node ha-900414-m02 status is now: NodeNotReady
	
	
	Name:               ha-900414-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-900414-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=ha-900414
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T17_19_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:19:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-900414-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:22:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:19:32 +0000   Mon, 29 Jul 2024 17:19:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:19:32 +0000   Mon, 29 Jul 2024 17:19:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:19:32 +0000   Mon, 29 Jul 2024 17:19:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:19:32 +0000   Mon, 29 Jul 2024 17:19:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    ha-900414-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a83fa48485e44a66899d03b0bc3026ab
	  System UUID:                a83fa484-85e4-4a66-899d-03b0bc3026ab
	  Boot ID:                    b5b7f427-05a9-48d1-b8b4-44023d1602b3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-s9sz8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 etcd-ha-900414-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m1s
	  kube-system                 kindnet-6vzd2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m3s
	  kube-system                 kube-apiserver-ha-900414-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-controller-manager-ha-900414-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-proxy-wnfsb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-ha-900414-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-vip-ha-900414-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m59s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m3s (x8 over 4m3s)  kubelet          Node ha-900414-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m3s (x8 over 4m3s)  kubelet          Node ha-900414-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m3s (x7 over 4m3s)  kubelet          Node ha-900414-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m                   node-controller  Node ha-900414-m03 event: Registered Node ha-900414-m03 in Controller
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-900414-m03 event: Registered Node ha-900414-m03 in Controller
	  Normal  RegisteredNode           3m43s                node-controller  Node ha-900414-m03 event: Registered Node ha-900414-m03 in Controller
	
	
	Name:               ha-900414-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-900414-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=ha-900414
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T17_20_07_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:20:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-900414-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:23:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:20:37 +0000   Mon, 29 Jul 2024 17:20:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:20:37 +0000   Mon, 29 Jul 2024 17:20:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:20:37 +0000   Mon, 29 Jul 2024 17:20:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:20:37 +0000   Mon, 29 Jul 2024 17:20:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.156
	  Hostname:    ha-900414-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 82b534ad740b47cbae65e1e5acf41d9a
	  System UUID:                82b534ad-740b-47cb-ae65-e1e5acf41d9a
	  Boot ID:                    0dc8577a-0725-49e3-80b3-d7aff48b060d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4fsvj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m59s
	  kube-system                 kube-proxy-hf5lx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m54s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m59s (x3 over 2m59s)  kubelet          Node ha-900414-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m59s (x3 over 2m59s)  kubelet          Node ha-900414-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m59s (x3 over 2m59s)  kubelet          Node ha-900414-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m58s                  node-controller  Node ha-900414-m04 event: Registered Node ha-900414-m04 in Controller
	  Normal  RegisteredNode           2m55s                  node-controller  Node ha-900414-m04 event: Registered Node ha-900414-m04 in Controller
	  Normal  RegisteredNode           2m54s                  node-controller  Node ha-900414-m04 event: Registered Node ha-900414-m04 in Controller
	  Normal  NodeReady                2m40s                  kubelet          Node ha-900414-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul29 17:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050825] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040072] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.771696] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.434213] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.575347] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.778112] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.061146] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062274] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.167660] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.152034] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.281641] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.241432] systemd-fstab-generator[770]: Ignoring "noauto" option for root device
	[  +5.177057] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[  +0.055961] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.040447] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	[  +0.087819] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.082632] kauditd_printk_skb: 21 callbacks suppressed
	[Jul29 17:17] kauditd_printk_skb: 38 callbacks suppressed
	[ +45.217420] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [2a27f5a54bd43275313e419dabaa643ad1764f5cd10953333df1eea8a9a4bf1b] <==
	{"level":"warn","ts":"2024-07-29T17:23:05.642056Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:23:05.661724Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:23:05.815004Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:23:05.833271Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:23:05.835185Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:23:05.841445Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:23:05.843373Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:23:05.85004Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:23:05.855629Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:23:05.869561Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:23:05.876779Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:23:05.886028Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:23:05.889826Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:23:05.893176Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:23:05.900902Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:23:05.907011Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:23:05.913122Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:23:05.917406Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:23:05.920184Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:23:05.925702Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:23:05.931406Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:23:05.93547Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:23:05.945696Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:23:05.95256Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:23:05.961457Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:23:06 up 7 min,  0 users,  load average: 0.13, 0.18, 0.10
	Linux ha-900414 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [10b182b72bc50740d9cc2e0ed8b5c1d4b8f58c58594cc462fc796a75ccce7d38] <==
	I0729 17:22:32.573516       1 main.go:322] Node ha-900414-m02 has CIDR [10.244.1.0/24] 
	I0729 17:22:42.578720       1 main.go:295] Handling node with IPs: map[192.168.39.156:{}]
	I0729 17:22:42.579509       1 main.go:322] Node ha-900414-m04 has CIDR [10.244.3.0/24] 
	I0729 17:22:42.580240       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0729 17:22:42.580296       1 main.go:299] handling current node
	I0729 17:22:42.580329       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0729 17:22:42.580347       1 main.go:322] Node ha-900414-m02 has CIDR [10.244.1.0/24] 
	I0729 17:22:42.580437       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0729 17:22:42.580456       1 main.go:322] Node ha-900414-m03 has CIDR [10.244.2.0/24] 
	I0729 17:22:52.579131       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0729 17:22:52.579191       1 main.go:299] handling current node
	I0729 17:22:52.579209       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0729 17:22:52.579215       1 main.go:322] Node ha-900414-m02 has CIDR [10.244.1.0/24] 
	I0729 17:22:52.579381       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0729 17:22:52.579408       1 main.go:322] Node ha-900414-m03 has CIDR [10.244.2.0/24] 
	I0729 17:22:52.579464       1 main.go:295] Handling node with IPs: map[192.168.39.156:{}]
	I0729 17:22:52.579485       1 main.go:322] Node ha-900414-m04 has CIDR [10.244.3.0/24] 
	I0729 17:23:02.569697       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0729 17:23:02.569918       1 main.go:322] Node ha-900414-m03 has CIDR [10.244.2.0/24] 
	I0729 17:23:02.570916       1 main.go:295] Handling node with IPs: map[192.168.39.156:{}]
	I0729 17:23:02.571019       1 main.go:322] Node ha-900414-m04 has CIDR [10.244.3.0/24] 
	I0729 17:23:02.571257       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0729 17:23:02.571342       1 main.go:299] handling current node
	I0729 17:23:02.571358       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0729 17:23:02.571431       1 main.go:322] Node ha-900414-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [dd71b5556931becb81321807072e3a8100ce3344e4dea3237c6918a6c8e98cc5] <==
	I0729 17:16:56.589158       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0729 17:16:57.040569       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0729 17:19:31.384720       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45810: use of closed network connection
	E0729 17:19:31.571363       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45816: use of closed network connection
	E0729 17:19:31.761636       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45832: use of closed network connection
	E0729 17:19:33.691188       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45858: use of closed network connection
	E0729 17:19:33.869730       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45868: use of closed network connection
	E0729 17:19:34.044313       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45892: use of closed network connection
	E0729 17:19:34.218790       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45908: use of closed network connection
	E0729 17:19:34.404623       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45926: use of closed network connection
	E0729 17:19:34.600229       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45946: use of closed network connection
	E0729 17:19:34.890385       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45968: use of closed network connection
	E0729 17:19:35.068689       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45988: use of closed network connection
	E0729 17:19:35.247808       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46000: use of closed network connection
	E0729 17:19:35.438480       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46016: use of closed network connection
	E0729 17:19:35.616766       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46036: use of closed network connection
	E0729 17:19:35.788203       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46054: use of closed network connection
	E0729 17:20:07.236783       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
	E0729 17:20:07.237465       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0729 17:20:07.237564       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 53.071µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0729 17:20:07.237641       1 wrap.go:54] timeout or abort while handling: method=GET URI="/api/v1/nodes/ha-900414-m04?timeout=10s" audit-ID="7f040c2d-02c2-4f9f-aecf-dc7d5824210b"
	E0729 17:20:07.237680       1 timeout.go:142] post-timeout activity - time-elapsed: 4.344µs, GET "/api/v1/nodes/ha-900414-m04" result: <nil>
	E0729 17:20:07.237867       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0729 17:20:07.238860       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0729 17:20:07.239080       1 timeout.go:142] post-timeout activity - time-elapsed: 1.768546ms, PATCH "/api/v1/namespaces/default/events/ha-900414-m04.17e6beb84d7e621f" result: <nil>
	
	
	==> kube-controller-manager [270db6978c4e4bce98a1f424ce50f66507840c818ab639d9ef02e8f96bab41d6] <==
	I0729 17:17:51.236250       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-900414-m02"
	I0729 17:19:02.587222       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-900414-m03\" does not exist"
	I0729 17:19:02.619098       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-900414-m03" podCIDRs=["10.244.2.0/24"]
	I0729 17:19:06.272231       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-900414-m03"
	I0729 17:19:28.821509       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.230041ms"
	I0729 17:19:28.901804       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.604594ms"
	I0729 17:19:29.055835       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="153.722953ms"
	E0729 17:19:29.056163       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0729 17:19:29.274045       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="217.381236ms"
	I0729 17:19:29.320884       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.624829ms"
	I0729 17:19:29.321581       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.563µs"
	I0729 17:19:30.511849       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.847622ms"
	I0729 17:19:30.512131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.623µs"
	I0729 17:19:30.575143       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.563726ms"
	I0729 17:19:30.575478       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.565µs"
	I0729 17:19:30.944109       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.524427ms"
	I0729 17:19:30.944603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.801µs"
	E0729 17:20:06.581048       1 certificate_controller.go:146] Sync csr-gb6wr failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-gb6wr": the object has been modified; please apply your changes to the latest version and try again
	I0729 17:20:06.814275       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-900414-m04\" does not exist"
	I0729 17:20:06.986373       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-900414-m04" podCIDRs=["10.244.3.0/24"]
	I0729 17:20:11.300187       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-900414-m04"
	I0729 17:20:25.165063       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-900414-m04"
	I0729 17:21:26.327741       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-900414-m04"
	I0729 17:21:26.488890       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.445665ms"
	I0729 17:21:26.489088       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="102.001µs"
	
	
	==> kube-proxy [37ef29620e9c9670549fa7741de5956157c7a03728d417b46b44a7b1abbf2ce9] <==
	I0729 17:16:58.224057       1 server_linux.go:69] "Using iptables proxy"
	I0729 17:16:58.277839       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.114"]
	I0729 17:16:58.378129       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 17:16:58.378189       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 17:16:58.378211       1 server_linux.go:165] "Using iptables Proxier"
	I0729 17:16:58.386893       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 17:16:58.387187       1 server.go:872] "Version info" version="v1.30.3"
	I0729 17:16:58.387218       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 17:16:58.391582       1 config.go:192] "Starting service config controller"
	I0729 17:16:58.392030       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 17:16:58.392111       1 config.go:101] "Starting endpoint slice config controller"
	I0729 17:16:58.392133       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 17:16:58.394091       1 config.go:319] "Starting node config controller"
	I0729 17:16:58.394116       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 17:16:58.492785       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 17:16:58.492841       1 shared_informer.go:320] Caches are synced for service config
	I0729 17:16:58.495011       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a7721018288f905547c9c059b6453a96e4c74f3573058e88425444162b255edf] <==
	E0729 17:16:42.148247       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 17:16:42.554255       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 17:16:42.554382       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 17:16:44.341514       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 17:19:28.758828       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="c02b335b-93e1-41d5-b53c-fc95bf6ecd59" pod="default/busybox-fc5497c4f-dqz55" assumedNode="ha-900414-m02" currentNode="ha-900414-m03"
	E0729 17:19:28.768612       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-dqz55\": pod busybox-fc5497c4f-dqz55 is already assigned to node \"ha-900414-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-dqz55" node="ha-900414-m03"
	E0729 17:19:28.768738       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod c02b335b-93e1-41d5-b53c-fc95bf6ecd59(default/busybox-fc5497c4f-dqz55) was assumed on ha-900414-m03 but assigned to ha-900414-m02" pod="default/busybox-fc5497c4f-dqz55"
	E0729 17:19:28.768763       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-dqz55\": pod busybox-fc5497c4f-dqz55 is already assigned to node \"ha-900414-m02\"" pod="default/busybox-fc5497c4f-dqz55"
	I0729 17:19:28.768804       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-dqz55" node="ha-900414-m02"
	E0729 17:19:28.803899       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-s9sz8\": pod busybox-fc5497c4f-s9sz8 is already assigned to node \"ha-900414-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-s9sz8" node="ha-900414-m03"
	E0729 17:19:28.804818       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 0a2e4648-8455-4ecc-bfcc-5642bfdbb2fe(default/busybox-fc5497c4f-s9sz8) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-s9sz8"
	E0729 17:19:28.805238       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-s9sz8\": pod busybox-fc5497c4f-s9sz8 is already assigned to node \"ha-900414-m03\"" pod="default/busybox-fc5497c4f-s9sz8"
	I0729 17:19:28.805313       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-s9sz8" node="ha-900414-m03"
	E0729 17:19:28.839698       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-4fv4t\": pod busybox-fc5497c4f-4fv4t is already assigned to node \"ha-900414\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-4fv4t" node="ha-900414"
	E0729 17:19:28.840078       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6(default/busybox-fc5497c4f-4fv4t) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-4fv4t"
	E0729 17:19:28.840422       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-4fv4t\": pod busybox-fc5497c4f-4fv4t is already assigned to node \"ha-900414\"" pod="default/busybox-fc5497c4f-4fv4t"
	I0729 17:19:28.840664       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-4fv4t" node="ha-900414"
	E0729 17:20:07.262272       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-hf5lx\": pod kube-proxy-hf5lx is already assigned to node \"ha-900414-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-hf5lx" node="ha-900414-m04"
	E0729 17:20:07.263149       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-hf5lx\": pod kube-proxy-hf5lx is already assigned to node \"ha-900414-m04\"" pod="kube-system/kube-proxy-hf5lx"
	E0729 17:20:07.264308       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4fsvj\": pod kindnet-4fsvj is already assigned to node \"ha-900414-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4fsvj" node="ha-900414-m04"
	E0729 17:20:07.264446       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4fsvj\": pod kindnet-4fsvj is already assigned to node \"ha-900414-m04\"" pod="kube-system/kindnet-4fsvj"
	E0729 17:20:07.308186       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rbc8g\": pod kindnet-rbc8g is already assigned to node \"ha-900414-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-rbc8g" node="ha-900414-m04"
	E0729 17:20:07.308315       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod fa8621a0-f2ea-48fe-8912-76fdd3bd193f(kube-system/kindnet-rbc8g) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-rbc8g"
	E0729 17:20:07.309175       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rbc8g\": pod kindnet-rbc8g is already assigned to node \"ha-900414-m04\"" pod="kube-system/kindnet-rbc8g"
	I0729 17:20:07.309262       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rbc8g" node="ha-900414-m04"
	
	
	==> kubelet <==
	Jul 29 17:18:43 ha-900414 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:18:43 ha-900414 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:19:28 ha-900414 kubelet[1377]: I0729 17:19:28.818678    1377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=150.818605227 podStartE2EDuration="2m30.818605227s" podCreationTimestamp="2024-07-29 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-29 17:17:14.031027508 +0000 UTC m=+30.460839998" watchObservedRunningTime="2024-07-29 17:19:28.818605227 +0000 UTC m=+165.248417717"
	Jul 29 17:19:28 ha-900414 kubelet[1377]: I0729 17:19:28.819510    1377 topology_manager.go:215] "Topology Admit Handler" podUID="bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6" podNamespace="default" podName="busybox-fc5497c4f-4fv4t"
	Jul 29 17:19:28 ha-900414 kubelet[1377]: I0729 17:19:28.899965    1377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scrxm\" (UniqueName: \"kubernetes.io/projected/bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6-kube-api-access-scrxm\") pod \"busybox-fc5497c4f-4fv4t\" (UID: \"bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6\") " pod="default/busybox-fc5497c4f-4fv4t"
	Jul 29 17:19:43 ha-900414 kubelet[1377]: E0729 17:19:43.789901    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:19:43 ha-900414 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:19:43 ha-900414 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:19:43 ha-900414 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:19:43 ha-900414 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:20:43 ha-900414 kubelet[1377]: E0729 17:20:43.786522    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:20:43 ha-900414 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:20:43 ha-900414 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:20:43 ha-900414 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:20:43 ha-900414 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:21:43 ha-900414 kubelet[1377]: E0729 17:21:43.785651    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:21:43 ha-900414 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:21:43 ha-900414 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:21:43 ha-900414 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:21:43 ha-900414 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:22:43 ha-900414 kubelet[1377]: E0729 17:22:43.786632    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:22:43 ha-900414 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:22:43 ha-900414 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:22:43 ha-900414 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:22:43 ha-900414 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-900414 -n ha-900414
helpers_test.go:261: (dbg) Run:  kubectl --context ha-900414 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.87s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (58.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr: exit status 3 (3.199233949s)

                                                
                                                
-- stdout --
	ha-900414
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-900414-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-900414-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-900414-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:23:10.513123   35041 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:23:10.513582   35041 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:23:10.513599   35041 out.go:304] Setting ErrFile to fd 2...
	I0729 17:23:10.513606   35041 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:23:10.514084   35041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 17:23:10.514507   35041 out.go:298] Setting JSON to false
	I0729 17:23:10.514534   35041 mustload.go:65] Loading cluster: ha-900414
	I0729 17:23:10.514585   35041 notify.go:220] Checking for updates...
	I0729 17:23:10.514873   35041 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:23:10.514887   35041 status.go:255] checking status of ha-900414 ...
	I0729 17:23:10.515244   35041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:10.515303   35041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:10.536688   35041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46329
	I0729 17:23:10.537087   35041 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:10.537644   35041 main.go:141] libmachine: Using API Version  1
	I0729 17:23:10.537669   35041 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:10.538104   35041 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:10.538325   35041 main.go:141] libmachine: (ha-900414) Calling .GetState
	I0729 17:23:10.539995   35041 status.go:330] ha-900414 host status = "Running" (err=<nil>)
	I0729 17:23:10.540020   35041 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:23:10.540312   35041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:10.540357   35041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:10.555470   35041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34317
	I0729 17:23:10.555825   35041 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:10.556277   35041 main.go:141] libmachine: Using API Version  1
	I0729 17:23:10.556302   35041 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:10.556671   35041 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:10.556847   35041 main.go:141] libmachine: (ha-900414) Calling .GetIP
	I0729 17:23:10.559305   35041 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:10.559782   35041 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:23:10.559807   35041 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:10.559959   35041 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:23:10.560259   35041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:10.560303   35041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:10.574416   35041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45895
	I0729 17:23:10.574736   35041 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:10.575238   35041 main.go:141] libmachine: Using API Version  1
	I0729 17:23:10.575260   35041 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:10.575564   35041 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:10.575754   35041 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:23:10.575934   35041 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:10.575961   35041 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:23:10.578482   35041 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:10.578881   35041 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:23:10.578913   35041 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:10.579067   35041 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:23:10.579247   35041 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:23:10.579404   35041 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:23:10.579573   35041 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:23:10.666052   35041 ssh_runner.go:195] Run: systemctl --version
	I0729 17:23:10.672239   35041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:23:10.687219   35041 kubeconfig.go:125] found "ha-900414" server: "https://192.168.39.254:8443"
	I0729 17:23:10.687249   35041 api_server.go:166] Checking apiserver status ...
	I0729 17:23:10.687285   35041 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:23:10.701279   35041 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1141/cgroup
	W0729 17:23:10.711185   35041 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1141/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:23:10.711246   35041 ssh_runner.go:195] Run: ls
	I0729 17:23:10.715688   35041 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:23:10.721427   35041 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:23:10.721452   35041 status.go:422] ha-900414 apiserver status = Running (err=<nil>)
	I0729 17:23:10.721465   35041 status.go:257] ha-900414 status: &{Name:ha-900414 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:23:10.721492   35041 status.go:255] checking status of ha-900414-m02 ...
	I0729 17:23:10.721884   35041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:10.721927   35041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:10.737329   35041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41287
	I0729 17:23:10.737715   35041 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:10.738158   35041 main.go:141] libmachine: Using API Version  1
	I0729 17:23:10.738177   35041 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:10.738496   35041 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:10.738656   35041 main.go:141] libmachine: (ha-900414-m02) Calling .GetState
	I0729 17:23:10.740025   35041 status.go:330] ha-900414-m02 host status = "Running" (err=<nil>)
	I0729 17:23:10.740040   35041 host.go:66] Checking if "ha-900414-m02" exists ...
	I0729 17:23:10.740305   35041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:10.740335   35041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:10.754205   35041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33515
	I0729 17:23:10.754610   35041 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:10.755036   35041 main.go:141] libmachine: Using API Version  1
	I0729 17:23:10.755061   35041 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:10.755347   35041 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:10.755523   35041 main.go:141] libmachine: (ha-900414-m02) Calling .GetIP
	I0729 17:23:10.758375   35041 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:23:10.758780   35041 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:23:10.758808   35041 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:23:10.758942   35041 host.go:66] Checking if "ha-900414-m02" exists ...
	I0729 17:23:10.759233   35041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:10.759272   35041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:10.774392   35041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36841
	I0729 17:23:10.774717   35041 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:10.775112   35041 main.go:141] libmachine: Using API Version  1
	I0729 17:23:10.775131   35041 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:10.775463   35041 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:10.775654   35041 main.go:141] libmachine: (ha-900414-m02) Calling .DriverName
	I0729 17:23:10.775817   35041 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:10.775831   35041 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:23:10.778343   35041 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:23:10.778731   35041 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:23:10.778755   35041 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:23:10.778904   35041 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:23:10.779060   35041 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:23:10.779190   35041 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:23:10.779318   35041 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/id_rsa Username:docker}
	W0729 17:23:13.330625   35041 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.111:22: connect: no route to host
	W0729 17:23:13.330728   35041 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	E0729 17:23:13.330750   35041 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0729 17:23:13.330762   35041 status.go:257] ha-900414-m02 status: &{Name:ha-900414-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 17:23:13.330785   35041 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0729 17:23:13.330798   35041 status.go:255] checking status of ha-900414-m03 ...
	I0729 17:23:13.331127   35041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:13.331170   35041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:13.347108   35041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43099
	I0729 17:23:13.347531   35041 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:13.347917   35041 main.go:141] libmachine: Using API Version  1
	I0729 17:23:13.347939   35041 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:13.348258   35041 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:13.348452   35041 main.go:141] libmachine: (ha-900414-m03) Calling .GetState
	I0729 17:23:13.350108   35041 status.go:330] ha-900414-m03 host status = "Running" (err=<nil>)
	I0729 17:23:13.350124   35041 host.go:66] Checking if "ha-900414-m03" exists ...
	I0729 17:23:13.350509   35041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:13.350548   35041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:13.364534   35041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45453
	I0729 17:23:13.364925   35041 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:13.365409   35041 main.go:141] libmachine: Using API Version  1
	I0729 17:23:13.365428   35041 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:13.365762   35041 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:13.365964   35041 main.go:141] libmachine: (ha-900414-m03) Calling .GetIP
	I0729 17:23:13.368833   35041 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:13.369243   35041 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:23:13.369269   35041 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:13.369376   35041 host.go:66] Checking if "ha-900414-m03" exists ...
	I0729 17:23:13.369659   35041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:13.369689   35041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:13.383681   35041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33827
	I0729 17:23:13.384180   35041 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:13.384730   35041 main.go:141] libmachine: Using API Version  1
	I0729 17:23:13.384749   35041 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:13.385036   35041 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:13.385191   35041 main.go:141] libmachine: (ha-900414-m03) Calling .DriverName
	I0729 17:23:13.385363   35041 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:13.385383   35041 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:23:13.388039   35041 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:13.388410   35041 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:23:13.388436   35041 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:13.388532   35041 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:23:13.388684   35041 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:23:13.388785   35041 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:23:13.388909   35041 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/id_rsa Username:docker}
	I0729 17:23:13.470589   35041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:23:13.484936   35041 kubeconfig.go:125] found "ha-900414" server: "https://192.168.39.254:8443"
	I0729 17:23:13.484958   35041 api_server.go:166] Checking apiserver status ...
	I0729 17:23:13.484985   35041 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:23:13.500279   35041 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1524/cgroup
	W0729 17:23:13.510073   35041 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1524/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:23:13.510115   35041 ssh_runner.go:195] Run: ls
	I0729 17:23:13.514588   35041 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:23:13.520551   35041 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:23:13.520574   35041 status.go:422] ha-900414-m03 apiserver status = Running (err=<nil>)
	I0729 17:23:13.520585   35041 status.go:257] ha-900414-m03 status: &{Name:ha-900414-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:23:13.520602   35041 status.go:255] checking status of ha-900414-m04 ...
	I0729 17:23:13.520942   35041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:13.520974   35041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:13.536176   35041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32885
	I0729 17:23:13.536513   35041 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:13.536999   35041 main.go:141] libmachine: Using API Version  1
	I0729 17:23:13.537024   35041 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:13.537297   35041 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:13.537474   35041 main.go:141] libmachine: (ha-900414-m04) Calling .GetState
	I0729 17:23:13.538898   35041 status.go:330] ha-900414-m04 host status = "Running" (err=<nil>)
	I0729 17:23:13.538917   35041 host.go:66] Checking if "ha-900414-m04" exists ...
	I0729 17:23:13.539219   35041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:13.539259   35041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:13.553464   35041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34563
	I0729 17:23:13.553899   35041 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:13.554391   35041 main.go:141] libmachine: Using API Version  1
	I0729 17:23:13.554411   35041 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:13.554672   35041 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:13.554853   35041 main.go:141] libmachine: (ha-900414-m04) Calling .GetIP
	I0729 17:23:13.557654   35041 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:13.558046   35041 main.go:141] libmachine: (ha-900414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:eb:e5", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:19:51 +0000 UTC Type:0 Mac:52:54:00:a6:eb:e5 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-900414-m04 Clientid:01:52:54:00:a6:eb:e5}
	I0729 17:23:13.558068   35041 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined IP address 192.168.39.156 and MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:13.558208   35041 host.go:66] Checking if "ha-900414-m04" exists ...
	I0729 17:23:13.558521   35041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:13.558557   35041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:13.573864   35041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38381
	I0729 17:23:13.574278   35041 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:13.574765   35041 main.go:141] libmachine: Using API Version  1
	I0729 17:23:13.574787   35041 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:13.575088   35041 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:13.575257   35041 main.go:141] libmachine: (ha-900414-m04) Calling .DriverName
	I0729 17:23:13.575414   35041 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:13.575440   35041 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHHostname
	I0729 17:23:13.577800   35041 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:13.578230   35041 main.go:141] libmachine: (ha-900414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:eb:e5", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:19:51 +0000 UTC Type:0 Mac:52:54:00:a6:eb:e5 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-900414-m04 Clientid:01:52:54:00:a6:eb:e5}
	I0729 17:23:13.578251   35041 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined IP address 192.168.39.156 and MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:13.578375   35041 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHPort
	I0729 17:23:13.578529   35041 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHKeyPath
	I0729 17:23:13.578666   35041 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHUsername
	I0729 17:23:13.578791   35041 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m04/id_rsa Username:docker}
	I0729 17:23:13.658031   35041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:23:13.674326   35041 status.go:257] ha-900414-m04 status: &{Name:ha-900414-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
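
The stderr block above records the per-node probe sequence the status command performs for a control plane: locate the kube-apiserver process, then request /healthz on the load-balancer VIP and treat an HTTP 200 "ok" as Running. What follows is a minimal, self-contained Go sketch of that kind of health probe, not minikube's own code: the URL mirrors the 192.168.39.254:8443 VIP seen in the log, and skipping TLS verification is an assumption made only so the example runs without the cluster CA bundle (a real client would trust the CA from the kubeconfig).

	// healthz_probe.go — illustrative sketch of an apiserver /healthz probe.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probeHealthz returns nil when the endpoint answers 200, mirroring the
	// "returned 200: ok" lines in the log above.
	func probeHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption for this sketch only: skip certificate verification
			// so the example runs without the cluster CA bundle.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return fmt.Errorf("healthz request failed: %w", err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
		return nil
	}

	func main() {
		// VIP and port taken from the log; adjust for your cluster.
		if err := probeHealthz("https://192.168.39.254:8443/healthz"); err != nil {
			fmt.Println("apiserver status = Stopped:", err)
			return
		}
		fmt.Println("apiserver status = Running")
	}
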
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr: exit status 3 (4.778297783s)

                                                
                                                
-- stdout --
	ha-900414
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-900414-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-900414-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-900414-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:23:15.084564   35125 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:23:15.084682   35125 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:23:15.084690   35125 out.go:304] Setting ErrFile to fd 2...
	I0729 17:23:15.084694   35125 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:23:15.084855   35125 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 17:23:15.085007   35125 out.go:298] Setting JSON to false
	I0729 17:23:15.085028   35125 mustload.go:65] Loading cluster: ha-900414
	I0729 17:23:15.085085   35125 notify.go:220] Checking for updates...
	I0729 17:23:15.085465   35125 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:23:15.085481   35125 status.go:255] checking status of ha-900414 ...
	I0729 17:23:15.085959   35125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:15.085994   35125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:15.103619   35125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41693
	I0729 17:23:15.104028   35125 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:15.104638   35125 main.go:141] libmachine: Using API Version  1
	I0729 17:23:15.104660   35125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:15.105016   35125 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:15.105227   35125 main.go:141] libmachine: (ha-900414) Calling .GetState
	I0729 17:23:15.106688   35125 status.go:330] ha-900414 host status = "Running" (err=<nil>)
	I0729 17:23:15.106704   35125 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:23:15.106978   35125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:15.107019   35125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:15.121757   35125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39379
	I0729 17:23:15.122170   35125 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:15.122545   35125 main.go:141] libmachine: Using API Version  1
	I0729 17:23:15.122564   35125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:15.122900   35125 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:15.123051   35125 main.go:141] libmachine: (ha-900414) Calling .GetIP
	I0729 17:23:15.125646   35125 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:15.126037   35125 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:23:15.126053   35125 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:15.126216   35125 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:23:15.126607   35125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:15.126685   35125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:15.141794   35125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42751
	I0729 17:23:15.142145   35125 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:15.142615   35125 main.go:141] libmachine: Using API Version  1
	I0729 17:23:15.142637   35125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:15.142911   35125 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:15.143128   35125 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:23:15.143305   35125 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:15.143347   35125 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:23:15.146121   35125 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:15.146495   35125 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:23:15.146518   35125 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:15.146653   35125 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:23:15.146815   35125 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:23:15.146951   35125 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:23:15.147079   35125 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:23:15.230149   35125 ssh_runner.go:195] Run: systemctl --version
	I0729 17:23:15.236546   35125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:23:15.253306   35125 kubeconfig.go:125] found "ha-900414" server: "https://192.168.39.254:8443"
	I0729 17:23:15.253335   35125 api_server.go:166] Checking apiserver status ...
	I0729 17:23:15.253363   35125 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:23:15.268652   35125 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1141/cgroup
	W0729 17:23:15.278477   35125 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1141/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:23:15.278525   35125 ssh_runner.go:195] Run: ls
	I0729 17:23:15.282875   35125 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:23:15.287039   35125 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:23:15.287061   35125 status.go:422] ha-900414 apiserver status = Running (err=<nil>)
	I0729 17:23:15.287070   35125 status.go:257] ha-900414 status: &{Name:ha-900414 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:23:15.287085   35125 status.go:255] checking status of ha-900414-m02 ...
	I0729 17:23:15.287367   35125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:15.287398   35125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:15.302331   35125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42657
	I0729 17:23:15.302822   35125 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:15.303289   35125 main.go:141] libmachine: Using API Version  1
	I0729 17:23:15.303311   35125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:15.303618   35125 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:15.303781   35125 main.go:141] libmachine: (ha-900414-m02) Calling .GetState
	I0729 17:23:15.305270   35125 status.go:330] ha-900414-m02 host status = "Running" (err=<nil>)
	I0729 17:23:15.305289   35125 host.go:66] Checking if "ha-900414-m02" exists ...
	I0729 17:23:15.305709   35125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:15.305750   35125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:15.320524   35125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40743
	I0729 17:23:15.320915   35125 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:15.321351   35125 main.go:141] libmachine: Using API Version  1
	I0729 17:23:15.321368   35125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:15.321620   35125 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:15.321776   35125 main.go:141] libmachine: (ha-900414-m02) Calling .GetIP
	I0729 17:23:15.324337   35125 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:23:15.324745   35125 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:23:15.324767   35125 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:23:15.324929   35125 host.go:66] Checking if "ha-900414-m02" exists ...
	I0729 17:23:15.325256   35125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:15.325291   35125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:15.340068   35125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44199
	I0729 17:23:15.340394   35125 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:15.340783   35125 main.go:141] libmachine: Using API Version  1
	I0729 17:23:15.340804   35125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:15.341114   35125 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:15.341274   35125 main.go:141] libmachine: (ha-900414-m02) Calling .DriverName
	I0729 17:23:15.341441   35125 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:15.341464   35125 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:23:15.344201   35125 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:23:15.344561   35125 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:23:15.344591   35125 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:23:15.344752   35125 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:23:15.344903   35125 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:23:15.345044   35125 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:23:15.345172   35125 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/id_rsa Username:docker}
	W0729 17:23:16.402654   35125 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.111:22: connect: no route to host
	I0729 17:23:16.402708   35125 retry.go:31] will retry after 176.55434ms: dial tcp 192.168.39.111:22: connect: no route to host
	W0729 17:23:19.474684   35125 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.111:22: connect: no route to host
	W0729 17:23:19.474786   35125 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	E0729 17:23:19.474808   35125 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0729 17:23:19.474819   35125 status.go:257] ha-900414-m02 status: &{Name:ha-900414-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 17:23:19.474846   35125 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0729 17:23:19.474859   35125 status.go:255] checking status of ha-900414-m03 ...
	I0729 17:23:19.475433   35125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:19.475487   35125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:19.490401   35125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40005
	I0729 17:23:19.490825   35125 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:19.491269   35125 main.go:141] libmachine: Using API Version  1
	I0729 17:23:19.491285   35125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:19.491589   35125 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:19.491748   35125 main.go:141] libmachine: (ha-900414-m03) Calling .GetState
	I0729 17:23:19.493369   35125 status.go:330] ha-900414-m03 host status = "Running" (err=<nil>)
	I0729 17:23:19.493384   35125 host.go:66] Checking if "ha-900414-m03" exists ...
	I0729 17:23:19.493657   35125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:19.493693   35125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:19.509022   35125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40425
	I0729 17:23:19.509454   35125 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:19.509936   35125 main.go:141] libmachine: Using API Version  1
	I0729 17:23:19.509960   35125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:19.510234   35125 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:19.510462   35125 main.go:141] libmachine: (ha-900414-m03) Calling .GetIP
	I0729 17:23:19.513306   35125 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:19.513686   35125 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:23:19.513720   35125 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:19.513831   35125 host.go:66] Checking if "ha-900414-m03" exists ...
	I0729 17:23:19.514310   35125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:19.514355   35125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:19.529878   35125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43299
	I0729 17:23:19.530331   35125 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:19.530856   35125 main.go:141] libmachine: Using API Version  1
	I0729 17:23:19.530877   35125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:19.531158   35125 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:19.531322   35125 main.go:141] libmachine: (ha-900414-m03) Calling .DriverName
	I0729 17:23:19.531523   35125 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:19.531542   35125 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:23:19.534101   35125 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:19.534526   35125 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:23:19.534566   35125 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:19.534656   35125 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:23:19.534811   35125 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:23:19.534955   35125 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:23:19.535057   35125 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/id_rsa Username:docker}
	I0729 17:23:19.618100   35125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:23:19.632728   35125 kubeconfig.go:125] found "ha-900414" server: "https://192.168.39.254:8443"
	I0729 17:23:19.632751   35125 api_server.go:166] Checking apiserver status ...
	I0729 17:23:19.632801   35125 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:23:19.647543   35125 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1524/cgroup
	W0729 17:23:19.657828   35125 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1524/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:23:19.657883   35125 ssh_runner.go:195] Run: ls
	I0729 17:23:19.662743   35125 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:23:19.668907   35125 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:23:19.668932   35125 status.go:422] ha-900414-m03 apiserver status = Running (err=<nil>)
	I0729 17:23:19.668943   35125 status.go:257] ha-900414-m03 status: &{Name:ha-900414-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:23:19.668961   35125 status.go:255] checking status of ha-900414-m04 ...
	I0729 17:23:19.669351   35125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:19.669391   35125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:19.684602   35125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33189
	I0729 17:23:19.685039   35125 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:19.685505   35125 main.go:141] libmachine: Using API Version  1
	I0729 17:23:19.685536   35125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:19.685854   35125 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:19.686025   35125 main.go:141] libmachine: (ha-900414-m04) Calling .GetState
	I0729 17:23:19.687586   35125 status.go:330] ha-900414-m04 host status = "Running" (err=<nil>)
	I0729 17:23:19.687603   35125 host.go:66] Checking if "ha-900414-m04" exists ...
	I0729 17:23:19.687956   35125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:19.687998   35125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:19.702291   35125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33379
	I0729 17:23:19.702728   35125 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:19.703179   35125 main.go:141] libmachine: Using API Version  1
	I0729 17:23:19.703202   35125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:19.703501   35125 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:19.703667   35125 main.go:141] libmachine: (ha-900414-m04) Calling .GetIP
	I0729 17:23:19.706319   35125 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:19.706735   35125 main.go:141] libmachine: (ha-900414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:eb:e5", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:19:51 +0000 UTC Type:0 Mac:52:54:00:a6:eb:e5 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-900414-m04 Clientid:01:52:54:00:a6:eb:e5}
	I0729 17:23:19.706764   35125 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined IP address 192.168.39.156 and MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:19.706892   35125 host.go:66] Checking if "ha-900414-m04" exists ...
	I0729 17:23:19.707252   35125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:19.707285   35125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:19.721297   35125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43753
	I0729 17:23:19.721645   35125 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:19.722052   35125 main.go:141] libmachine: Using API Version  1
	I0729 17:23:19.722069   35125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:19.722350   35125 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:19.722565   35125 main.go:141] libmachine: (ha-900414-m04) Calling .DriverName
	I0729 17:23:19.722749   35125 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:19.722773   35125 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHHostname
	I0729 17:23:19.725262   35125 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:19.725641   35125 main.go:141] libmachine: (ha-900414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:eb:e5", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:19:51 +0000 UTC Type:0 Mac:52:54:00:a6:eb:e5 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-900414-m04 Clientid:01:52:54:00:a6:eb:e5}
	I0729 17:23:19.725659   35125 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined IP address 192.168.39.156 and MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:19.725801   35125 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHPort
	I0729 17:23:19.725968   35125 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHKeyPath
	I0729 17:23:19.726127   35125 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHUsername
	I0729 17:23:19.726259   35125 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m04/id_rsa Username:docker}
	I0729 17:23:19.805502   35125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:23:19.820196   35125 status.go:257] ha-900414-m04 status: &{Name:ha-900414-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
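
The ha-900414-m02 error in the block above comes from the SSH dial path: sshutil retries the TCP connection to 192.168.39.111:22, logging "dial failure (will retry)" on each "connect: no route to host", until the node is finally reported as Host:Error / Kubelet:Nonexistent. Below is an illustrative Go sketch of that dial-with-retry pattern, not the sshutil implementation; the 3s per-attempt timeout, 200ms backoff, and 10s overall deadline are assumptions chosen for the example, not minikube's actual tuning.

	// dialretry.go — illustrative sketch of a TCP dial with retry and deadline.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// dialWithRetry keeps attempting a TCP connection until it succeeds or the
	// overall deadline expires, then returns the last error to the caller so it
	// can surface a status error like the one logged above.
	func dialWithRetry(addr string, deadline time.Duration) (net.Conn, error) {
		start := time.Now()
		for {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				return conn, nil
			}
			fmt.Printf("dial failure (will retry): %v\n", err)
			if time.Since(start) > deadline {
				return nil, fmt.Errorf("giving up after %s: %w", deadline, err)
			}
			time.Sleep(200 * time.Millisecond)
		}
	}

	func main() {
		// 192.168.39.111:22 is the unreachable ha-900414-m02 SSH endpoint from the log.
		conn, err := dialWithRetry("192.168.39.111:22", 10*time.Second)
		if err != nil {
			fmt.Println("status error:", err)
			return
		}
		defer conn.Close()
		fmt.Println("ssh endpoint reachable")
	}
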
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr: exit status 3 (4.346332032s)

                                                
                                                
-- stdout --
	ha-900414
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-900414-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-900414-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-900414-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:23:21.782111   35242 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:23:21.782209   35242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:23:21.782214   35242 out.go:304] Setting ErrFile to fd 2...
	I0729 17:23:21.782218   35242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:23:21.782394   35242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 17:23:21.782543   35242 out.go:298] Setting JSON to false
	I0729 17:23:21.782566   35242 mustload.go:65] Loading cluster: ha-900414
	I0729 17:23:21.782599   35242 notify.go:220] Checking for updates...
	I0729 17:23:21.782902   35242 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:23:21.782916   35242 status.go:255] checking status of ha-900414 ...
	I0729 17:23:21.783336   35242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:21.783388   35242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:21.802749   35242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45935
	I0729 17:23:21.803206   35242 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:21.803893   35242 main.go:141] libmachine: Using API Version  1
	I0729 17:23:21.803921   35242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:21.804221   35242 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:21.804422   35242 main.go:141] libmachine: (ha-900414) Calling .GetState
	I0729 17:23:21.806084   35242 status.go:330] ha-900414 host status = "Running" (err=<nil>)
	I0729 17:23:21.806102   35242 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:23:21.806478   35242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:21.806514   35242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:21.821987   35242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35707
	I0729 17:23:21.822464   35242 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:21.822960   35242 main.go:141] libmachine: Using API Version  1
	I0729 17:23:21.822990   35242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:21.823281   35242 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:21.823454   35242 main.go:141] libmachine: (ha-900414) Calling .GetIP
	I0729 17:23:21.825883   35242 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:21.826403   35242 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:23:21.826442   35242 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:21.826564   35242 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:23:21.826880   35242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:21.826926   35242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:21.842412   35242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32871
	I0729 17:23:21.842808   35242 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:21.843361   35242 main.go:141] libmachine: Using API Version  1
	I0729 17:23:21.843385   35242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:21.843691   35242 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:21.843881   35242 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:23:21.844087   35242 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:21.844126   35242 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:23:21.847182   35242 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:21.847540   35242 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:23:21.847566   35242 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:21.847728   35242 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:23:21.847914   35242 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:23:21.848055   35242 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:23:21.848222   35242 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:23:21.930434   35242 ssh_runner.go:195] Run: systemctl --version
	I0729 17:23:21.936762   35242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:23:21.951569   35242 kubeconfig.go:125] found "ha-900414" server: "https://192.168.39.254:8443"
	I0729 17:23:21.951598   35242 api_server.go:166] Checking apiserver status ...
	I0729 17:23:21.951632   35242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:23:21.966452   35242 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1141/cgroup
	W0729 17:23:21.976565   35242 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1141/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:23:21.976610   35242 ssh_runner.go:195] Run: ls
	I0729 17:23:21.980608   35242 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:23:21.984713   35242 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:23:21.984732   35242 status.go:422] ha-900414 apiserver status = Running (err=<nil>)
	I0729 17:23:21.984741   35242 status.go:257] ha-900414 status: &{Name:ha-900414 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:23:21.984756   35242 status.go:255] checking status of ha-900414-m02 ...
	I0729 17:23:21.985032   35242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:21.985060   35242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:22.000525   35242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35185
	I0729 17:23:22.000923   35242 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:22.001389   35242 main.go:141] libmachine: Using API Version  1
	I0729 17:23:22.001405   35242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:22.001732   35242 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:22.001915   35242 main.go:141] libmachine: (ha-900414-m02) Calling .GetState
	I0729 17:23:22.003484   35242 status.go:330] ha-900414-m02 host status = "Running" (err=<nil>)
	I0729 17:23:22.003501   35242 host.go:66] Checking if "ha-900414-m02" exists ...
	I0729 17:23:22.003811   35242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:22.003844   35242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:22.018372   35242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45369
	I0729 17:23:22.018816   35242 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:22.019303   35242 main.go:141] libmachine: Using API Version  1
	I0729 17:23:22.019334   35242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:22.019662   35242 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:22.019812   35242 main.go:141] libmachine: (ha-900414-m02) Calling .GetIP
	I0729 17:23:22.022658   35242 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:23:22.023077   35242 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:23:22.023102   35242 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:23:22.023250   35242 host.go:66] Checking if "ha-900414-m02" exists ...
	I0729 17:23:22.023569   35242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:22.023600   35242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:22.037689   35242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I0729 17:23:22.038066   35242 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:22.038549   35242 main.go:141] libmachine: Using API Version  1
	I0729 17:23:22.038568   35242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:22.038849   35242 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:22.039054   35242 main.go:141] libmachine: (ha-900414-m02) Calling .DriverName
	I0729 17:23:22.039216   35242 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:22.039234   35242 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:23:22.041837   35242 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:23:22.042238   35242 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:23:22.042265   35242 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:23:22.042396   35242 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:23:22.042562   35242 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:23:22.042683   35242 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:23:22.042823   35242 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/id_rsa Username:docker}
	W0729 17:23:22.546623   35242 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.111:22: connect: no route to host
	I0729 17:23:22.546676   35242 retry.go:31] will retry after 128.185199ms: dial tcp 192.168.39.111:22: connect: no route to host
	W0729 17:23:25.746587   35242 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.111:22: connect: no route to host
	W0729 17:23:25.746688   35242 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	E0729 17:23:25.746711   35242 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0729 17:23:25.746722   35242 status.go:257] ha-900414-m02 status: &{Name:ha-900414-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 17:23:25.746749   35242 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0729 17:23:25.746773   35242 status.go:255] checking status of ha-900414-m03 ...
	I0729 17:23:25.747071   35242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:25.747117   35242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:25.761580   35242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43007
	I0729 17:23:25.761964   35242 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:25.762419   35242 main.go:141] libmachine: Using API Version  1
	I0729 17:23:25.762438   35242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:25.762764   35242 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:25.762940   35242 main.go:141] libmachine: (ha-900414-m03) Calling .GetState
	I0729 17:23:25.764373   35242 status.go:330] ha-900414-m03 host status = "Running" (err=<nil>)
	I0729 17:23:25.764390   35242 host.go:66] Checking if "ha-900414-m03" exists ...
	I0729 17:23:25.764771   35242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:25.764819   35242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:25.778753   35242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33519
	I0729 17:23:25.779177   35242 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:25.779702   35242 main.go:141] libmachine: Using API Version  1
	I0729 17:23:25.779726   35242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:25.780062   35242 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:25.780277   35242 main.go:141] libmachine: (ha-900414-m03) Calling .GetIP
	I0729 17:23:25.782905   35242 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:25.783305   35242 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:23:25.783345   35242 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:25.783443   35242 host.go:66] Checking if "ha-900414-m03" exists ...
	I0729 17:23:25.783747   35242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:25.783778   35242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:25.798117   35242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33475
	I0729 17:23:25.798475   35242 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:25.798870   35242 main.go:141] libmachine: Using API Version  1
	I0729 17:23:25.798895   35242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:25.799207   35242 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:25.799389   35242 main.go:141] libmachine: (ha-900414-m03) Calling .DriverName
	I0729 17:23:25.799555   35242 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:25.799582   35242 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:23:25.802237   35242 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:25.802618   35242 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:23:25.802643   35242 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:25.802779   35242 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:23:25.802935   35242 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:23:25.803055   35242 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:23:25.803202   35242 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/id_rsa Username:docker}
	I0729 17:23:25.886179   35242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:23:25.900943   35242 kubeconfig.go:125] found "ha-900414" server: "https://192.168.39.254:8443"
	I0729 17:23:25.900966   35242 api_server.go:166] Checking apiserver status ...
	I0729 17:23:25.900993   35242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:23:25.914924   35242 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1524/cgroup
	W0729 17:23:25.925287   35242 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1524/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:23:25.925343   35242 ssh_runner.go:195] Run: ls
	I0729 17:23:25.930049   35242 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:23:25.935525   35242 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:23:25.935548   35242 status.go:422] ha-900414-m03 apiserver status = Running (err=<nil>)
	I0729 17:23:25.935559   35242 status.go:257] ha-900414-m03 status: &{Name:ha-900414-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:23:25.935578   35242 status.go:255] checking status of ha-900414-m04 ...
	I0729 17:23:25.935942   35242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:25.935978   35242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:25.950520   35242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46521
	I0729 17:23:25.950873   35242 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:25.951324   35242 main.go:141] libmachine: Using API Version  1
	I0729 17:23:25.951343   35242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:25.951610   35242 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:25.951802   35242 main.go:141] libmachine: (ha-900414-m04) Calling .GetState
	I0729 17:23:25.953392   35242 status.go:330] ha-900414-m04 host status = "Running" (err=<nil>)
	I0729 17:23:25.953408   35242 host.go:66] Checking if "ha-900414-m04" exists ...
	I0729 17:23:25.953688   35242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:25.953724   35242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:25.968866   35242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41177
	I0729 17:23:25.969234   35242 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:25.969640   35242 main.go:141] libmachine: Using API Version  1
	I0729 17:23:25.969657   35242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:25.969995   35242 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:25.970188   35242 main.go:141] libmachine: (ha-900414-m04) Calling .GetIP
	I0729 17:23:25.972863   35242 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:25.973267   35242 main.go:141] libmachine: (ha-900414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:eb:e5", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:19:51 +0000 UTC Type:0 Mac:52:54:00:a6:eb:e5 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-900414-m04 Clientid:01:52:54:00:a6:eb:e5}
	I0729 17:23:25.973300   35242 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined IP address 192.168.39.156 and MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:25.973457   35242 host.go:66] Checking if "ha-900414-m04" exists ...
	I0729 17:23:25.973746   35242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:25.973790   35242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:25.988015   35242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40415
	I0729 17:23:25.988478   35242 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:25.988949   35242 main.go:141] libmachine: Using API Version  1
	I0729 17:23:25.988969   35242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:25.989367   35242 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:25.989574   35242 main.go:141] libmachine: (ha-900414-m04) Calling .DriverName
	I0729 17:23:25.989782   35242 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:25.989809   35242 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHHostname
	I0729 17:23:25.992320   35242 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:25.992686   35242 main.go:141] libmachine: (ha-900414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:eb:e5", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:19:51 +0000 UTC Type:0 Mac:52:54:00:a6:eb:e5 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-900414-m04 Clientid:01:52:54:00:a6:eb:e5}
	I0729 17:23:25.992711   35242 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined IP address 192.168.39.156 and MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:25.992847   35242 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHPort
	I0729 17:23:25.993042   35242 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHKeyPath
	I0729 17:23:25.993169   35242 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHUsername
	I0729 17:23:25.993296   35242 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m04/id_rsa Username:docker}
	I0729 17:23:26.073513   35242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:23:26.087552   35242 status.go:257] ha-900414-m04 status: &{Name:ha-900414-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
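
The stderr block above shows how the `minikube status` probe decides that ha-900414-m02 is in the Host:Error state: it opens an SSH session to the machine and runs `df -h /var | awk 'NR==2{print $5}'`, and once the TCP dial to 192.168.39.111:22 keeps failing with "no route to host" the check is abandoned and the node is reported with Kubelet:Nonexistent and APIServer:Nonexistent. The following is a minimal, hypothetical Go sketch of that kind of probe using golang.org/x/crypto/ssh; the user, host, and key path are placeholder assumptions and this is not minikube's actual code.

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// probeVarUsage mirrors the check seen in the log: dial the node over SSH
	// and run df on /var. A dial error such as "no route to host" is returned
	// to the caller, which would map it to Host:Error.
	func probeVarUsage(addr, user, keyPath string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a throwaway test VM
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			// This is the path taken for ha-900414-m02 in the log above.
			return "", fmt.Errorf("new client: %w", err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(`sh -c "df -h /var | awk 'NR==2{print $5}'"`)
		return string(out), err
	}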
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr
E0729 17:23:29.676655   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr: exit status 3 (3.700316166s)

                                                
                                                
-- stdout --
	ha-900414
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-900414-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-900414-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-900414-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:23:29.414429   35342 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:23:29.414529   35342 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:23:29.414538   35342 out.go:304] Setting ErrFile to fd 2...
	I0729 17:23:29.414542   35342 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:23:29.414716   35342 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 17:23:29.414860   35342 out.go:298] Setting JSON to false
	I0729 17:23:29.414883   35342 mustload.go:65] Loading cluster: ha-900414
	I0729 17:23:29.414932   35342 notify.go:220] Checking for updates...
	I0729 17:23:29.415417   35342 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:23:29.415436   35342 status.go:255] checking status of ha-900414 ...
	I0729 17:23:29.415852   35342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:29.415891   35342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:29.436916   35342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44595
	I0729 17:23:29.437286   35342 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:29.437999   35342 main.go:141] libmachine: Using API Version  1
	I0729 17:23:29.438028   35342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:29.438341   35342 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:29.438553   35342 main.go:141] libmachine: (ha-900414) Calling .GetState
	I0729 17:23:29.440050   35342 status.go:330] ha-900414 host status = "Running" (err=<nil>)
	I0729 17:23:29.440067   35342 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:23:29.440347   35342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:29.440378   35342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:29.454988   35342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46799
	I0729 17:23:29.455350   35342 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:29.455769   35342 main.go:141] libmachine: Using API Version  1
	I0729 17:23:29.455787   35342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:29.456044   35342 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:29.456228   35342 main.go:141] libmachine: (ha-900414) Calling .GetIP
	I0729 17:23:29.458855   35342 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:29.459240   35342 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:23:29.459266   35342 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:29.459391   35342 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:23:29.459651   35342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:29.459688   35342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:29.474191   35342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45589
	I0729 17:23:29.474626   35342 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:29.475005   35342 main.go:141] libmachine: Using API Version  1
	I0729 17:23:29.475026   35342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:29.475283   35342 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:29.475446   35342 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:23:29.475654   35342 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:29.475677   35342 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:23:29.478320   35342 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:29.478806   35342 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:23:29.478836   35342 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:29.478990   35342 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:23:29.479149   35342 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:23:29.479288   35342 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:23:29.479410   35342 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:23:29.558339   35342 ssh_runner.go:195] Run: systemctl --version
	I0729 17:23:29.564663   35342 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:23:29.579833   35342 kubeconfig.go:125] found "ha-900414" server: "https://192.168.39.254:8443"
	I0729 17:23:29.579864   35342 api_server.go:166] Checking apiserver status ...
	I0729 17:23:29.579894   35342 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:23:29.593091   35342 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1141/cgroup
	W0729 17:23:29.603190   35342 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1141/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:23:29.603236   35342 ssh_runner.go:195] Run: ls
	I0729 17:23:29.607217   35342 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:23:29.611348   35342 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:23:29.611368   35342 status.go:422] ha-900414 apiserver status = Running (err=<nil>)
	I0729 17:23:29.611380   35342 status.go:257] ha-900414 status: &{Name:ha-900414 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:23:29.611404   35342 status.go:255] checking status of ha-900414-m02 ...
	I0729 17:23:29.611680   35342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:29.611718   35342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:29.627141   35342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42449
	I0729 17:23:29.627528   35342 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:29.628031   35342 main.go:141] libmachine: Using API Version  1
	I0729 17:23:29.628051   35342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:29.628346   35342 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:29.628544   35342 main.go:141] libmachine: (ha-900414-m02) Calling .GetState
	I0729 17:23:29.630098   35342 status.go:330] ha-900414-m02 host status = "Running" (err=<nil>)
	I0729 17:23:29.630125   35342 host.go:66] Checking if "ha-900414-m02" exists ...
	I0729 17:23:29.630415   35342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:29.630451   35342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:29.644756   35342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43505
	I0729 17:23:29.645173   35342 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:29.645711   35342 main.go:141] libmachine: Using API Version  1
	I0729 17:23:29.645734   35342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:29.646094   35342 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:29.646294   35342 main.go:141] libmachine: (ha-900414-m02) Calling .GetIP
	I0729 17:23:29.649181   35342 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:23:29.649589   35342 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:23:29.649617   35342 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:23:29.649763   35342 host.go:66] Checking if "ha-900414-m02" exists ...
	I0729 17:23:29.650178   35342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:29.650226   35342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:29.664948   35342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36105
	I0729 17:23:29.665297   35342 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:29.665730   35342 main.go:141] libmachine: Using API Version  1
	I0729 17:23:29.665765   35342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:29.666096   35342 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:29.666297   35342 main.go:141] libmachine: (ha-900414-m02) Calling .DriverName
	I0729 17:23:29.666474   35342 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:29.666496   35342 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:23:29.669363   35342 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:23:29.669746   35342 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:23:29.669771   35342 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:23:29.669915   35342 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:23:29.670066   35342 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:23:29.670217   35342 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:23:29.670353   35342 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/id_rsa Username:docker}
	W0729 17:23:32.722589   35342 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.111:22: connect: no route to host
	W0729 17:23:32.722682   35342 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	E0729 17:23:32.722704   35342 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0729 17:23:32.722718   35342 status.go:257] ha-900414-m02 status: &{Name:ha-900414-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 17:23:32.722762   35342 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0729 17:23:32.722778   35342 status.go:255] checking status of ha-900414-m03 ...
	I0729 17:23:32.723222   35342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:32.723284   35342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:32.738396   35342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35677
	I0729 17:23:32.738787   35342 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:32.739386   35342 main.go:141] libmachine: Using API Version  1
	I0729 17:23:32.739411   35342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:32.739716   35342 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:32.739901   35342 main.go:141] libmachine: (ha-900414-m03) Calling .GetState
	I0729 17:23:32.741500   35342 status.go:330] ha-900414-m03 host status = "Running" (err=<nil>)
	I0729 17:23:32.741519   35342 host.go:66] Checking if "ha-900414-m03" exists ...
	I0729 17:23:32.741929   35342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:32.742002   35342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:32.756289   35342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34979
	I0729 17:23:32.756723   35342 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:32.757261   35342 main.go:141] libmachine: Using API Version  1
	I0729 17:23:32.757283   35342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:32.757559   35342 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:32.757747   35342 main.go:141] libmachine: (ha-900414-m03) Calling .GetIP
	I0729 17:23:32.760438   35342 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:32.760887   35342 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:23:32.760925   35342 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:32.761082   35342 host.go:66] Checking if "ha-900414-m03" exists ...
	I0729 17:23:32.761453   35342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:32.761487   35342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:32.776716   35342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34441
	I0729 17:23:32.777222   35342 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:32.777669   35342 main.go:141] libmachine: Using API Version  1
	I0729 17:23:32.777682   35342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:32.777979   35342 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:32.778167   35342 main.go:141] libmachine: (ha-900414-m03) Calling .DriverName
	I0729 17:23:32.778376   35342 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:32.778392   35342 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:23:32.780849   35342 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:32.781321   35342 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:23:32.781349   35342 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:32.781476   35342 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:23:32.781647   35342 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:23:32.781765   35342 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:23:32.781895   35342 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/id_rsa Username:docker}
	I0729 17:23:32.862043   35342 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:23:32.885243   35342 kubeconfig.go:125] found "ha-900414" server: "https://192.168.39.254:8443"
	I0729 17:23:32.885268   35342 api_server.go:166] Checking apiserver status ...
	I0729 17:23:32.885297   35342 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:23:32.897781   35342 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1524/cgroup
	W0729 17:23:32.906652   35342 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1524/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:23:32.906699   35342 ssh_runner.go:195] Run: ls
	I0729 17:23:32.910853   35342 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:23:32.917763   35342 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:23:32.917784   35342 status.go:422] ha-900414-m03 apiserver status = Running (err=<nil>)
	I0729 17:23:32.917794   35342 status.go:257] ha-900414-m03 status: &{Name:ha-900414-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:23:32.917820   35342 status.go:255] checking status of ha-900414-m04 ...
	I0729 17:23:32.918124   35342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:32.918164   35342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:32.934289   35342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34809
	I0729 17:23:32.934645   35342 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:32.935158   35342 main.go:141] libmachine: Using API Version  1
	I0729 17:23:32.935194   35342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:32.935579   35342 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:32.935765   35342 main.go:141] libmachine: (ha-900414-m04) Calling .GetState
	I0729 17:23:32.937415   35342 status.go:330] ha-900414-m04 host status = "Running" (err=<nil>)
	I0729 17:23:32.937431   35342 host.go:66] Checking if "ha-900414-m04" exists ...
	I0729 17:23:32.937707   35342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:32.937736   35342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:32.952467   35342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35735
	I0729 17:23:32.952876   35342 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:32.953298   35342 main.go:141] libmachine: Using API Version  1
	I0729 17:23:32.953318   35342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:32.953626   35342 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:32.953804   35342 main.go:141] libmachine: (ha-900414-m04) Calling .GetIP
	I0729 17:23:32.956815   35342 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:32.957239   35342 main.go:141] libmachine: (ha-900414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:eb:e5", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:19:51 +0000 UTC Type:0 Mac:52:54:00:a6:eb:e5 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-900414-m04 Clientid:01:52:54:00:a6:eb:e5}
	I0729 17:23:32.957268   35342 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined IP address 192.168.39.156 and MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:32.957438   35342 host.go:66] Checking if "ha-900414-m04" exists ...
	I0729 17:23:32.957715   35342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:32.957745   35342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:32.972154   35342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39731
	I0729 17:23:32.972478   35342 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:32.972930   35342 main.go:141] libmachine: Using API Version  1
	I0729 17:23:32.972948   35342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:32.973218   35342 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:32.973380   35342 main.go:141] libmachine: (ha-900414-m04) Calling .DriverName
	I0729 17:23:32.973549   35342 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:32.973568   35342 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHHostname
	I0729 17:23:32.976226   35342 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:32.976698   35342 main.go:141] libmachine: (ha-900414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:eb:e5", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:19:51 +0000 UTC Type:0 Mac:52:54:00:a6:eb:e5 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-900414-m04 Clientid:01:52:54:00:a6:eb:e5}
	I0729 17:23:32.976723   35342 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined IP address 192.168.39.156 and MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:32.976897   35342 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHPort
	I0729 17:23:32.977066   35342 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHKeyPath
	I0729 17:23:32.977202   35342 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHUsername
	I0729 17:23:32.977344   35342 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m04/id_rsa Username:docker}
	I0729 17:23:33.061439   35342 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:23:33.075801   35342 status.go:257] ha-900414-m04 status: &{Name:ha-900414-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
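
For the control-plane nodes that are reachable (ha-900414 and ha-900414-m03), the same blocks show the apiserver half of the check: locate the kube-apiserver process with pgrep, then GET https://192.168.39.254:8443/healthz and treat an HTTP 200 ("returned 200: ok") as APIServer:Running. Below is a small sketch of that healthz call; skipping TLS verification is an assumption made only to keep the example short, and the helper name is hypothetical.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// apiserverHealthy returns true when /healthz answers 200, matching the
	// "https://192.168.39.254:8443/healthz returned 200: ok" lines above.
	func apiserverHealthy(endpoint string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The test cluster serves a self-signed certificate; verification
				// is skipped here purely for brevity in this sketch.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK, nil
	}

	func main() {
		ok, err := apiserverHealthy("https://192.168.39.254:8443")
		fmt.Println(ok, err)
	}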
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr: exit status 3 (3.721826596s)

                                                
                                                
-- stdout --
	ha-900414
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-900414-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-900414-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-900414-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:23:37.328662   35458 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:23:37.328783   35458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:23:37.328793   35458 out.go:304] Setting ErrFile to fd 2...
	I0729 17:23:37.328797   35458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:23:37.328985   35458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 17:23:37.329185   35458 out.go:298] Setting JSON to false
	I0729 17:23:37.329211   35458 mustload.go:65] Loading cluster: ha-900414
	I0729 17:23:37.329326   35458 notify.go:220] Checking for updates...
	I0729 17:23:37.329667   35458 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:23:37.329682   35458 status.go:255] checking status of ha-900414 ...
	I0729 17:23:37.330062   35458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:37.330129   35458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:37.345439   35458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34541
	I0729 17:23:37.345822   35458 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:37.346477   35458 main.go:141] libmachine: Using API Version  1
	I0729 17:23:37.346501   35458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:37.346813   35458 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:37.347012   35458 main.go:141] libmachine: (ha-900414) Calling .GetState
	I0729 17:23:37.348568   35458 status.go:330] ha-900414 host status = "Running" (err=<nil>)
	I0729 17:23:37.348585   35458 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:23:37.348979   35458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:37.349019   35458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:37.363945   35458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43323
	I0729 17:23:37.364317   35458 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:37.364716   35458 main.go:141] libmachine: Using API Version  1
	I0729 17:23:37.364733   35458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:37.365067   35458 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:37.365245   35458 main.go:141] libmachine: (ha-900414) Calling .GetIP
	I0729 17:23:37.368178   35458 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:37.368697   35458 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:23:37.368727   35458 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:37.368878   35458 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:23:37.369243   35458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:37.369300   35458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:37.385115   35458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38919
	I0729 17:23:37.385534   35458 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:37.386064   35458 main.go:141] libmachine: Using API Version  1
	I0729 17:23:37.386082   35458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:37.386443   35458 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:37.386636   35458 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:23:37.386849   35458 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:37.386870   35458 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:23:37.389532   35458 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:37.389904   35458 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:23:37.389933   35458 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:37.390141   35458 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:23:37.390319   35458 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:23:37.390475   35458 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:23:37.390603   35458 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:23:37.471114   35458 ssh_runner.go:195] Run: systemctl --version
	I0729 17:23:37.478107   35458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:23:37.494392   35458 kubeconfig.go:125] found "ha-900414" server: "https://192.168.39.254:8443"
	I0729 17:23:37.494412   35458 api_server.go:166] Checking apiserver status ...
	I0729 17:23:37.494439   35458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:23:37.510445   35458 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1141/cgroup
	W0729 17:23:37.520217   35458 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1141/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:23:37.520254   35458 ssh_runner.go:195] Run: ls
	I0729 17:23:37.525202   35458 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:23:37.529138   35458 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:23:37.529160   35458 status.go:422] ha-900414 apiserver status = Running (err=<nil>)
	I0729 17:23:37.529173   35458 status.go:257] ha-900414 status: &{Name:ha-900414 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:23:37.529208   35458 status.go:255] checking status of ha-900414-m02 ...
	I0729 17:23:37.529511   35458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:37.529549   35458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:37.544057   35458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38755
	I0729 17:23:37.544477   35458 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:37.544855   35458 main.go:141] libmachine: Using API Version  1
	I0729 17:23:37.544887   35458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:37.545186   35458 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:37.545389   35458 main.go:141] libmachine: (ha-900414-m02) Calling .GetState
	I0729 17:23:37.546986   35458 status.go:330] ha-900414-m02 host status = "Running" (err=<nil>)
	I0729 17:23:37.547003   35458 host.go:66] Checking if "ha-900414-m02" exists ...
	I0729 17:23:37.547285   35458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:37.547326   35458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:37.562238   35458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36125
	I0729 17:23:37.562657   35458 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:37.563106   35458 main.go:141] libmachine: Using API Version  1
	I0729 17:23:37.563127   35458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:37.563490   35458 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:37.563684   35458 main.go:141] libmachine: (ha-900414-m02) Calling .GetIP
	I0729 17:23:37.566605   35458 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:23:37.567094   35458 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:23:37.567116   35458 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:23:37.567265   35458 host.go:66] Checking if "ha-900414-m02" exists ...
	I0729 17:23:37.567604   35458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:37.567660   35458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:37.581979   35458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38753
	I0729 17:23:37.582326   35458 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:37.582838   35458 main.go:141] libmachine: Using API Version  1
	I0729 17:23:37.582861   35458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:37.583168   35458 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:37.583350   35458 main.go:141] libmachine: (ha-900414-m02) Calling .DriverName
	I0729 17:23:37.583489   35458 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:37.583505   35458 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:23:37.585962   35458 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:23:37.586375   35458 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:23:37.586395   35458 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:23:37.586574   35458 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:23:37.586734   35458 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:23:37.586873   35458 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:23:37.587049   35458 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/id_rsa Username:docker}
	W0729 17:23:40.658599   35458 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.111:22: connect: no route to host
	W0729 17:23:40.658692   35458 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	E0729 17:23:40.658716   35458 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0729 17:23:40.658728   35458 status.go:257] ha-900414-m02 status: &{Name:ha-900414-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 17:23:40.658750   35458 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0729 17:23:40.658763   35458 status.go:255] checking status of ha-900414-m03 ...
	I0729 17:23:40.659169   35458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:40.659237   35458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:40.674298   35458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44933
	I0729 17:23:40.674687   35458 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:40.675150   35458 main.go:141] libmachine: Using API Version  1
	I0729 17:23:40.675174   35458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:40.675484   35458 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:40.675723   35458 main.go:141] libmachine: (ha-900414-m03) Calling .GetState
	I0729 17:23:40.677041   35458 status.go:330] ha-900414-m03 host status = "Running" (err=<nil>)
	I0729 17:23:40.677055   35458 host.go:66] Checking if "ha-900414-m03" exists ...
	I0729 17:23:40.677349   35458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:40.677386   35458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:40.692340   35458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43989
	I0729 17:23:40.692710   35458 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:40.693191   35458 main.go:141] libmachine: Using API Version  1
	I0729 17:23:40.693211   35458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:40.693516   35458 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:40.693711   35458 main.go:141] libmachine: (ha-900414-m03) Calling .GetIP
	I0729 17:23:40.696244   35458 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:40.696575   35458 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:23:40.696599   35458 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:40.696718   35458 host.go:66] Checking if "ha-900414-m03" exists ...
	I0729 17:23:40.697025   35458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:40.697056   35458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:40.712025   35458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37283
	I0729 17:23:40.712478   35458 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:40.712911   35458 main.go:141] libmachine: Using API Version  1
	I0729 17:23:40.712941   35458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:40.713233   35458 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:40.713380   35458 main.go:141] libmachine: (ha-900414-m03) Calling .DriverName
	I0729 17:23:40.713580   35458 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:40.713599   35458 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:23:40.716305   35458 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:40.716699   35458 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:23:40.716733   35458 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:40.716890   35458 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:23:40.717078   35458 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:23:40.717234   35458 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:23:40.717347   35458 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/id_rsa Username:docker}
	I0729 17:23:40.798607   35458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:23:40.820538   35458 kubeconfig.go:125] found "ha-900414" server: "https://192.168.39.254:8443"
	I0729 17:23:40.820569   35458 api_server.go:166] Checking apiserver status ...
	I0729 17:23:40.820608   35458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:23:40.837903   35458 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1524/cgroup
	W0729 17:23:40.848908   35458 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1524/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:23:40.848955   35458 ssh_runner.go:195] Run: ls
	I0729 17:23:40.853002   35458 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:23:40.857161   35458 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:23:40.857180   35458 status.go:422] ha-900414-m03 apiserver status = Running (err=<nil>)
	I0729 17:23:40.857187   35458 status.go:257] ha-900414-m03 status: &{Name:ha-900414-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:23:40.857201   35458 status.go:255] checking status of ha-900414-m04 ...
	I0729 17:23:40.857454   35458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:40.857481   35458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:40.871950   35458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41597
	I0729 17:23:40.872320   35458 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:40.872805   35458 main.go:141] libmachine: Using API Version  1
	I0729 17:23:40.872824   35458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:40.873148   35458 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:40.873362   35458 main.go:141] libmachine: (ha-900414-m04) Calling .GetState
	I0729 17:23:40.874869   35458 status.go:330] ha-900414-m04 host status = "Running" (err=<nil>)
	I0729 17:23:40.874884   35458 host.go:66] Checking if "ha-900414-m04" exists ...
	I0729 17:23:40.875194   35458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:40.875228   35458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:40.889271   35458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42367
	I0729 17:23:40.889563   35458 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:40.890036   35458 main.go:141] libmachine: Using API Version  1
	I0729 17:23:40.890068   35458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:40.890351   35458 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:40.890547   35458 main.go:141] libmachine: (ha-900414-m04) Calling .GetIP
	I0729 17:23:40.892990   35458 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:40.893451   35458 main.go:141] libmachine: (ha-900414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:eb:e5", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:19:51 +0000 UTC Type:0 Mac:52:54:00:a6:eb:e5 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-900414-m04 Clientid:01:52:54:00:a6:eb:e5}
	I0729 17:23:40.893485   35458 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined IP address 192.168.39.156 and MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:40.893586   35458 host.go:66] Checking if "ha-900414-m04" exists ...
	I0729 17:23:40.893986   35458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:40.894028   35458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:40.907746   35458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40893
	I0729 17:23:40.908066   35458 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:40.908454   35458 main.go:141] libmachine: Using API Version  1
	I0729 17:23:40.908476   35458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:40.908790   35458 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:40.909005   35458 main.go:141] libmachine: (ha-900414-m04) Calling .DriverName
	I0729 17:23:40.909213   35458 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:40.909234   35458 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHHostname
	I0729 17:23:40.911650   35458 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:40.912113   35458 main.go:141] libmachine: (ha-900414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:eb:e5", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:19:51 +0000 UTC Type:0 Mac:52:54:00:a6:eb:e5 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-900414-m04 Clientid:01:52:54:00:a6:eb:e5}
	I0729 17:23:40.912142   35458 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined IP address 192.168.39.156 and MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:40.912341   35458 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHPort
	I0729 17:23:40.912528   35458 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHKeyPath
	I0729 17:23:40.912661   35458 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHUsername
	I0729 17:23:40.912795   35458 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m04/id_rsa Username:docker}
	I0729 17:23:40.993484   35458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:23:41.007903   35458 status.go:257] ha-900414-m04 status: &{Name:ha-900414-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
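The stderr trace above shows the probe sequence `minikube status` runs for each control-plane node: an SSH disk-usage check (`df -h /var`), a `pgrep` for kube-apiserver, a freezer-cgroup lookup, and finally an HTTPS GET against the load-balanced apiserver endpoint's /healthz, expecting a 200. Below is a minimal standalone sketch of that last step only; the endpoint value is copied from the log, and skipping certificate verification is an illustrative assumption (minikube itself trusts the cluster CA rather than skipping checks).

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// apiserverHealthy reports whether GET <endpoint>/healthz returns HTTP 200,
	// mirroring the "Checking apiserver healthz" step in the trace above.
	func apiserverHealthy(endpoint string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The test cluster uses a self-signed CA; this sketch skips
			// verification for brevity, which minikube does not do.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK, nil
	}

	func main() {
		ok, err := apiserverHealthy("https://192.168.39.254:8443")
		fmt.Println("apiserver healthy:", ok, err)
	}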
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr: exit status 7 (621.616847ms)

                                                
                                                
-- stdout --
	ha-900414
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-900414-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-900414-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-900414-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:23:47.168740   35599 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:23:47.168866   35599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:23:47.168876   35599 out.go:304] Setting ErrFile to fd 2...
	I0729 17:23:47.168882   35599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:23:47.169148   35599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 17:23:47.169325   35599 out.go:298] Setting JSON to false
	I0729 17:23:47.169349   35599 mustload.go:65] Loading cluster: ha-900414
	I0729 17:23:47.169478   35599 notify.go:220] Checking for updates...
	I0729 17:23:47.169764   35599 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:23:47.169784   35599 status.go:255] checking status of ha-900414 ...
	I0729 17:23:47.170244   35599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:47.170295   35599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:47.189893   35599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46681
	I0729 17:23:47.190292   35599 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:47.190846   35599 main.go:141] libmachine: Using API Version  1
	I0729 17:23:47.190867   35599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:47.191258   35599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:47.191426   35599 main.go:141] libmachine: (ha-900414) Calling .GetState
	I0729 17:23:47.193093   35599 status.go:330] ha-900414 host status = "Running" (err=<nil>)
	I0729 17:23:47.193111   35599 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:23:47.193387   35599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:47.193422   35599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:47.208979   35599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42279
	I0729 17:23:47.209415   35599 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:47.209834   35599 main.go:141] libmachine: Using API Version  1
	I0729 17:23:47.209855   35599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:47.210157   35599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:47.210327   35599 main.go:141] libmachine: (ha-900414) Calling .GetIP
	I0729 17:23:47.213107   35599 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:47.213466   35599 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:23:47.213505   35599 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:47.213582   35599 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:23:47.213968   35599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:47.214012   35599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:47.229193   35599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43613
	I0729 17:23:47.229677   35599 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:47.230136   35599 main.go:141] libmachine: Using API Version  1
	I0729 17:23:47.230167   35599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:47.230475   35599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:47.230645   35599 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:23:47.230846   35599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:47.230873   35599 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:23:47.233415   35599 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:47.233761   35599 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:23:47.233799   35599 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:47.233977   35599 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:23:47.234139   35599 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:23:47.234289   35599 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:23:47.234437   35599 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:23:47.322390   35599 ssh_runner.go:195] Run: systemctl --version
	I0729 17:23:47.328175   35599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:23:47.342933   35599 kubeconfig.go:125] found "ha-900414" server: "https://192.168.39.254:8443"
	I0729 17:23:47.342957   35599 api_server.go:166] Checking apiserver status ...
	I0729 17:23:47.342983   35599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:23:47.358053   35599 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1141/cgroup
	W0729 17:23:47.367738   35599 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1141/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:23:47.367790   35599 ssh_runner.go:195] Run: ls
	I0729 17:23:47.372612   35599 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:23:47.378452   35599 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:23:47.378475   35599 status.go:422] ha-900414 apiserver status = Running (err=<nil>)
	I0729 17:23:47.378488   35599 status.go:257] ha-900414 status: &{Name:ha-900414 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:23:47.378511   35599 status.go:255] checking status of ha-900414-m02 ...
	I0729 17:23:47.378798   35599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:47.378841   35599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:47.393807   35599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38655
	I0729 17:23:47.394192   35599 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:47.394696   35599 main.go:141] libmachine: Using API Version  1
	I0729 17:23:47.394721   35599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:47.394986   35599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:47.395173   35599 main.go:141] libmachine: (ha-900414-m02) Calling .GetState
	I0729 17:23:47.396794   35599 status.go:330] ha-900414-m02 host status = "Stopped" (err=<nil>)
	I0729 17:23:47.396810   35599 status.go:343] host is not running, skipping remaining checks
	I0729 17:23:47.396818   35599 status.go:257] ha-900414-m02 status: &{Name:ha-900414-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:23:47.396838   35599 status.go:255] checking status of ha-900414-m03 ...
	I0729 17:23:47.397242   35599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:47.397307   35599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:47.413496   35599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40641
	I0729 17:23:47.413851   35599 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:47.414282   35599 main.go:141] libmachine: Using API Version  1
	I0729 17:23:47.414302   35599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:47.414671   35599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:47.414865   35599 main.go:141] libmachine: (ha-900414-m03) Calling .GetState
	I0729 17:23:47.416235   35599 status.go:330] ha-900414-m03 host status = "Running" (err=<nil>)
	I0729 17:23:47.416253   35599 host.go:66] Checking if "ha-900414-m03" exists ...
	I0729 17:23:47.416549   35599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:47.416589   35599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:47.434459   35599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43777
	I0729 17:23:47.434985   35599 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:47.435508   35599 main.go:141] libmachine: Using API Version  1
	I0729 17:23:47.435536   35599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:47.435846   35599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:47.436013   35599 main.go:141] libmachine: (ha-900414-m03) Calling .GetIP
	I0729 17:23:47.439270   35599 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:47.439685   35599 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:23:47.439713   35599 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:47.439855   35599 host.go:66] Checking if "ha-900414-m03" exists ...
	I0729 17:23:47.440163   35599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:47.440204   35599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:47.456069   35599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46365
	I0729 17:23:47.456490   35599 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:47.456971   35599 main.go:141] libmachine: Using API Version  1
	I0729 17:23:47.456989   35599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:47.457263   35599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:47.457438   35599 main.go:141] libmachine: (ha-900414-m03) Calling .DriverName
	I0729 17:23:47.457602   35599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:47.457618   35599 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:23:47.460425   35599 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:47.460875   35599 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:23:47.460902   35599 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:47.461016   35599 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:23:47.461213   35599 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:23:47.461364   35599 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:23:47.461516   35599 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/id_rsa Username:docker}
	I0729 17:23:47.542522   35599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:23:47.559263   35599 kubeconfig.go:125] found "ha-900414" server: "https://192.168.39.254:8443"
	I0729 17:23:47.559288   35599 api_server.go:166] Checking apiserver status ...
	I0729 17:23:47.559324   35599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:23:47.574214   35599 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1524/cgroup
	W0729 17:23:47.584774   35599 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1524/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:23:47.584833   35599 ssh_runner.go:195] Run: ls
	I0729 17:23:47.589631   35599 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:23:47.593863   35599 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:23:47.593904   35599 status.go:422] ha-900414-m03 apiserver status = Running (err=<nil>)
	I0729 17:23:47.593914   35599 status.go:257] ha-900414-m03 status: &{Name:ha-900414-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:23:47.593935   35599 status.go:255] checking status of ha-900414-m04 ...
	I0729 17:23:47.594246   35599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:47.594306   35599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:47.610277   35599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34331
	I0729 17:23:47.610769   35599 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:47.611255   35599 main.go:141] libmachine: Using API Version  1
	I0729 17:23:47.611275   35599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:47.611585   35599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:47.611772   35599 main.go:141] libmachine: (ha-900414-m04) Calling .GetState
	I0729 17:23:47.613365   35599 status.go:330] ha-900414-m04 host status = "Running" (err=<nil>)
	I0729 17:23:47.613380   35599 host.go:66] Checking if "ha-900414-m04" exists ...
	I0729 17:23:47.613652   35599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:47.613681   35599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:47.628377   35599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34567
	I0729 17:23:47.628776   35599 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:47.629220   35599 main.go:141] libmachine: Using API Version  1
	I0729 17:23:47.629244   35599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:47.629549   35599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:47.629709   35599 main.go:141] libmachine: (ha-900414-m04) Calling .GetIP
	I0729 17:23:47.632152   35599 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:47.632580   35599 main.go:141] libmachine: (ha-900414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:eb:e5", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:19:51 +0000 UTC Type:0 Mac:52:54:00:a6:eb:e5 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-900414-m04 Clientid:01:52:54:00:a6:eb:e5}
	I0729 17:23:47.632609   35599 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined IP address 192.168.39.156 and MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:47.632701   35599 host.go:66] Checking if "ha-900414-m04" exists ...
	I0729 17:23:47.633030   35599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:47.633068   35599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:47.648043   35599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45947
	I0729 17:23:47.648467   35599 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:47.648869   35599 main.go:141] libmachine: Using API Version  1
	I0729 17:23:47.648902   35599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:47.649184   35599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:47.649334   35599 main.go:141] libmachine: (ha-900414-m04) Calling .DriverName
	I0729 17:23:47.649530   35599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:47.649553   35599 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHHostname
	I0729 17:23:47.652016   35599 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:47.652365   35599 main.go:141] libmachine: (ha-900414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:eb:e5", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:19:51 +0000 UTC Type:0 Mac:52:54:00:a6:eb:e5 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-900414-m04 Clientid:01:52:54:00:a6:eb:e5}
	I0729 17:23:47.652383   35599 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined IP address 192.168.39.156 and MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:47.652510   35599 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHPort
	I0729 17:23:47.652641   35599 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHKeyPath
	I0729 17:23:47.652787   35599 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHUsername
	I0729 17:23:47.652942   35599 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m04/id_rsa Username:docker}
	I0729 17:23:47.733880   35599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:23:47.749036   35599 status.go:257] ha-900414-m04 status: &{Name:ha-900414-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
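For the worker node (ha-900414-m04) the trace stops after `sudo systemctl is-active --quiet service kubelet`, since apiserver and kubeconfig are reported as Irrelevant there. A minimal local sketch of that single check follows; running it directly instead of over the SSH session shown in the log is an assumption made for brevity.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kubeletActive runs `systemctl is-active --quiet kubelet`; the command
	// exits 0 only when the unit is active, which Run() reports as a nil error.
	func kubeletActive() bool {
		return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
	}

	func main() {
		fmt.Println("kubelet active:", kubeletActive())
	}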
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr: exit status 7 (594.073718ms)

                                                
                                                
-- stdout --
	ha-900414
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-900414-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-900414-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-900414-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:23:55.006454   35687 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:23:55.006683   35687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:23:55.006690   35687 out.go:304] Setting ErrFile to fd 2...
	I0729 17:23:55.006695   35687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:23:55.006841   35687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 17:23:55.006988   35687 out.go:298] Setting JSON to false
	I0729 17:23:55.007010   35687 mustload.go:65] Loading cluster: ha-900414
	I0729 17:23:55.007042   35687 notify.go:220] Checking for updates...
	I0729 17:23:55.007372   35687 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:23:55.007385   35687 status.go:255] checking status of ha-900414 ...
	I0729 17:23:55.007744   35687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:55.007793   35687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:55.022271   35687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35091
	I0729 17:23:55.022655   35687 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:55.023271   35687 main.go:141] libmachine: Using API Version  1
	I0729 17:23:55.023290   35687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:55.023634   35687 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:55.023857   35687 main.go:141] libmachine: (ha-900414) Calling .GetState
	I0729 17:23:55.025339   35687 status.go:330] ha-900414 host status = "Running" (err=<nil>)
	I0729 17:23:55.025356   35687 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:23:55.025630   35687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:55.025666   35687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:55.039958   35687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46665
	I0729 17:23:55.040319   35687 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:55.040666   35687 main.go:141] libmachine: Using API Version  1
	I0729 17:23:55.040682   35687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:55.040979   35687 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:55.041141   35687 main.go:141] libmachine: (ha-900414) Calling .GetIP
	I0729 17:23:55.043640   35687 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:55.044009   35687 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:23:55.044035   35687 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:55.044162   35687 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:23:55.044446   35687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:55.044475   35687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:55.058329   35687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40629
	I0729 17:23:55.058719   35687 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:55.059102   35687 main.go:141] libmachine: Using API Version  1
	I0729 17:23:55.059124   35687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:55.059399   35687 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:55.059560   35687 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:23:55.059732   35687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:55.059763   35687 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:23:55.062259   35687 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:55.062694   35687 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:23:55.062738   35687 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:23:55.062897   35687 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:23:55.063032   35687 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:23:55.063193   35687 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:23:55.063300   35687 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:23:55.141954   35687 ssh_runner.go:195] Run: systemctl --version
	I0729 17:23:55.147980   35687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:23:55.162948   35687 kubeconfig.go:125] found "ha-900414" server: "https://192.168.39.254:8443"
	I0729 17:23:55.162975   35687 api_server.go:166] Checking apiserver status ...
	I0729 17:23:55.163012   35687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:23:55.176511   35687 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1141/cgroup
	W0729 17:23:55.185346   35687 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1141/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:23:55.185384   35687 ssh_runner.go:195] Run: ls
	I0729 17:23:55.189851   35687 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:23:55.195443   35687 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:23:55.195464   35687 status.go:422] ha-900414 apiserver status = Running (err=<nil>)
	I0729 17:23:55.195474   35687 status.go:257] ha-900414 status: &{Name:ha-900414 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:23:55.195493   35687 status.go:255] checking status of ha-900414-m02 ...
	I0729 17:23:55.195798   35687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:55.195835   35687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:55.210779   35687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44975
	I0729 17:23:55.211165   35687 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:55.211587   35687 main.go:141] libmachine: Using API Version  1
	I0729 17:23:55.211610   35687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:55.211928   35687 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:55.212106   35687 main.go:141] libmachine: (ha-900414-m02) Calling .GetState
	I0729 17:23:55.213408   35687 status.go:330] ha-900414-m02 host status = "Stopped" (err=<nil>)
	I0729 17:23:55.213422   35687 status.go:343] host is not running, skipping remaining checks
	I0729 17:23:55.213430   35687 status.go:257] ha-900414-m02 status: &{Name:ha-900414-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:23:55.213448   35687 status.go:255] checking status of ha-900414-m03 ...
	I0729 17:23:55.213716   35687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:55.213769   35687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:55.228672   35687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43623
	I0729 17:23:55.229079   35687 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:55.229493   35687 main.go:141] libmachine: Using API Version  1
	I0729 17:23:55.229513   35687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:55.229764   35687 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:55.229935   35687 main.go:141] libmachine: (ha-900414-m03) Calling .GetState
	I0729 17:23:55.231311   35687 status.go:330] ha-900414-m03 host status = "Running" (err=<nil>)
	I0729 17:23:55.231326   35687 host.go:66] Checking if "ha-900414-m03" exists ...
	I0729 17:23:55.231632   35687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:55.231671   35687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:55.245696   35687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44561
	I0729 17:23:55.246130   35687 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:55.246583   35687 main.go:141] libmachine: Using API Version  1
	I0729 17:23:55.246601   35687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:55.246882   35687 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:55.247065   35687 main.go:141] libmachine: (ha-900414-m03) Calling .GetIP
	I0729 17:23:55.249712   35687 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:55.250177   35687 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:23:55.250207   35687 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:55.250331   35687 host.go:66] Checking if "ha-900414-m03" exists ...
	I0729 17:23:55.250653   35687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:55.250689   35687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:55.265437   35687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41365
	I0729 17:23:55.265841   35687 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:55.266284   35687 main.go:141] libmachine: Using API Version  1
	I0729 17:23:55.266315   35687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:55.266606   35687 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:55.266787   35687 main.go:141] libmachine: (ha-900414-m03) Calling .DriverName
	I0729 17:23:55.266987   35687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:55.267004   35687 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:23:55.269790   35687 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:55.270211   35687 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:23:55.270235   35687 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:23:55.270343   35687 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:23:55.270519   35687 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:23:55.270667   35687 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:23:55.270783   35687 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/id_rsa Username:docker}
	I0729 17:23:55.354213   35687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:23:55.371292   35687 kubeconfig.go:125] found "ha-900414" server: "https://192.168.39.254:8443"
	I0729 17:23:55.371318   35687 api_server.go:166] Checking apiserver status ...
	I0729 17:23:55.371352   35687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:23:55.385397   35687 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1524/cgroup
	W0729 17:23:55.394832   35687 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1524/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:23:55.394872   35687 ssh_runner.go:195] Run: ls
	I0729 17:23:55.399589   35687 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:23:55.403732   35687 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:23:55.403748   35687 status.go:422] ha-900414-m03 apiserver status = Running (err=<nil>)
	I0729 17:23:55.403756   35687 status.go:257] ha-900414-m03 status: &{Name:ha-900414-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:23:55.403768   35687 status.go:255] checking status of ha-900414-m04 ...
	I0729 17:23:55.404081   35687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:55.404128   35687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:55.418609   35687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35901
	I0729 17:23:55.419091   35687 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:55.419572   35687 main.go:141] libmachine: Using API Version  1
	I0729 17:23:55.419592   35687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:55.419902   35687 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:55.420076   35687 main.go:141] libmachine: (ha-900414-m04) Calling .GetState
	I0729 17:23:55.421486   35687 status.go:330] ha-900414-m04 host status = "Running" (err=<nil>)
	I0729 17:23:55.421504   35687 host.go:66] Checking if "ha-900414-m04" exists ...
	I0729 17:23:55.421844   35687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:55.421881   35687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:55.436401   35687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37061
	I0729 17:23:55.436733   35687 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:55.437144   35687 main.go:141] libmachine: Using API Version  1
	I0729 17:23:55.437163   35687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:55.437451   35687 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:55.437665   35687 main.go:141] libmachine: (ha-900414-m04) Calling .GetIP
	I0729 17:23:55.440259   35687 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:55.440641   35687 main.go:141] libmachine: (ha-900414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:eb:e5", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:19:51 +0000 UTC Type:0 Mac:52:54:00:a6:eb:e5 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-900414-m04 Clientid:01:52:54:00:a6:eb:e5}
	I0729 17:23:55.440665   35687 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined IP address 192.168.39.156 and MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:55.440800   35687 host.go:66] Checking if "ha-900414-m04" exists ...
	I0729 17:23:55.441072   35687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:23:55.441101   35687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:23:55.455980   35687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43467
	I0729 17:23:55.456373   35687 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:23:55.456809   35687 main.go:141] libmachine: Using API Version  1
	I0729 17:23:55.456828   35687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:23:55.457121   35687 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:23:55.457283   35687 main.go:141] libmachine: (ha-900414-m04) Calling .DriverName
	I0729 17:23:55.457463   35687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:23:55.457486   35687 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHHostname
	I0729 17:23:55.460099   35687 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:55.460444   35687 main.go:141] libmachine: (ha-900414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:eb:e5", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:19:51 +0000 UTC Type:0 Mac:52:54:00:a6:eb:e5 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-900414-m04 Clientid:01:52:54:00:a6:eb:e5}
	I0729 17:23:55.460470   35687 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined IP address 192.168.39.156 and MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:23:55.460607   35687 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHPort
	I0729 17:23:55.460750   35687 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHKeyPath
	I0729 17:23:55.460902   35687 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHUsername
	I0729 17:23:55.461011   35687 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m04/id_rsa Username:docker}
	I0729 17:23:55.542248   35687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:23:55.557545   35687 status.go:257] ha-900414-m04 status: &{Name:ha-900414-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
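The same `out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr` invocation is repeated every few seconds (17:23:40, 17:23:47, 17:23:55, 17:24:06) while ha-900414-m02 stays Stopped, i.e. the test is polling for the node to report Running again. A generic poll helper in that spirit is sketched below; the interval, timeout, and probe body are illustrative assumptions, not values taken from ha_test.go.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// pollUntil calls probe every interval until it returns true or the
	// timeout elapses, mirroring the retry pattern visible in the trace above.
	func pollUntil(interval, timeout time.Duration, probe func() bool) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if probe() {
				return nil
			}
			time.Sleep(interval)
		}
		return errors.New("condition not met before timeout")
	}

	func main() {
		err := pollUntil(7*time.Second, 1*time.Minute, func() bool {
			// A real probe would run `minikube status` and check that every
			// node reports host/kubelet (and apiserver where relevant) Running.
			return false
		})
		fmt.Println(err)
	}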
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr: exit status 7 (621.219168ms)

                                                
                                                
-- stdout --
	ha-900414
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-900414-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-900414-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-900414-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:24:06.119417   35808 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:24:06.119668   35808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:24:06.119678   35808 out.go:304] Setting ErrFile to fd 2...
	I0729 17:24:06.119683   35808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:24:06.119846   35808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 17:24:06.120036   35808 out.go:298] Setting JSON to false
	I0729 17:24:06.120062   35808 mustload.go:65] Loading cluster: ha-900414
	I0729 17:24:06.120197   35808 notify.go:220] Checking for updates...
	I0729 17:24:06.120564   35808 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:24:06.120585   35808 status.go:255] checking status of ha-900414 ...
	I0729 17:24:06.121028   35808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:24:06.121076   35808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:24:06.140847   35808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43359
	I0729 17:24:06.141374   35808 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:24:06.141897   35808 main.go:141] libmachine: Using API Version  1
	I0729 17:24:06.141919   35808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:24:06.142223   35808 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:24:06.142427   35808 main.go:141] libmachine: (ha-900414) Calling .GetState
	I0729 17:24:06.144099   35808 status.go:330] ha-900414 host status = "Running" (err=<nil>)
	I0729 17:24:06.144112   35808 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:24:06.144408   35808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:24:06.144441   35808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:24:06.158909   35808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41987
	I0729 17:24:06.159456   35808 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:24:06.159876   35808 main.go:141] libmachine: Using API Version  1
	I0729 17:24:06.159896   35808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:24:06.160197   35808 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:24:06.160379   35808 main.go:141] libmachine: (ha-900414) Calling .GetIP
	I0729 17:24:06.163192   35808 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:24:06.163639   35808 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:24:06.163668   35808 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:24:06.163754   35808 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:24:06.164050   35808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:24:06.164082   35808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:24:06.178486   35808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35175
	I0729 17:24:06.178919   35808 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:24:06.179402   35808 main.go:141] libmachine: Using API Version  1
	I0729 17:24:06.179419   35808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:24:06.179703   35808 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:24:06.179866   35808 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:24:06.180051   35808 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:24:06.180070   35808 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:24:06.182714   35808 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:24:06.183136   35808 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:24:06.183164   35808 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:24:06.183299   35808 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:24:06.183453   35808 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:24:06.183602   35808 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:24:06.183719   35808 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:24:06.272970   35808 ssh_runner.go:195] Run: systemctl --version
	I0729 17:24:06.279180   35808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:24:06.295442   35808 kubeconfig.go:125] found "ha-900414" server: "https://192.168.39.254:8443"
	I0729 17:24:06.295468   35808 api_server.go:166] Checking apiserver status ...
	I0729 17:24:06.295503   35808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:24:06.309945   35808 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1141/cgroup
	W0729 17:24:06.321383   35808 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1141/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:24:06.321435   35808 ssh_runner.go:195] Run: ls
	I0729 17:24:06.325859   35808 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:24:06.330289   35808 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:24:06.330314   35808 status.go:422] ha-900414 apiserver status = Running (err=<nil>)
	I0729 17:24:06.330328   35808 status.go:257] ha-900414 status: &{Name:ha-900414 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:24:06.330348   35808 status.go:255] checking status of ha-900414-m02 ...
	I0729 17:24:06.330767   35808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:24:06.330807   35808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:24:06.347244   35808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44879
	I0729 17:24:06.347612   35808 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:24:06.348029   35808 main.go:141] libmachine: Using API Version  1
	I0729 17:24:06.348048   35808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:24:06.348634   35808 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:24:06.348824   35808 main.go:141] libmachine: (ha-900414-m02) Calling .GetState
	I0729 17:24:06.350436   35808 status.go:330] ha-900414-m02 host status = "Stopped" (err=<nil>)
	I0729 17:24:06.350454   35808 status.go:343] host is not running, skipping remaining checks
	I0729 17:24:06.350461   35808 status.go:257] ha-900414-m02 status: &{Name:ha-900414-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:24:06.350478   35808 status.go:255] checking status of ha-900414-m03 ...
	I0729 17:24:06.350882   35808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:24:06.350925   35808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:24:06.365537   35808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36997
	I0729 17:24:06.365947   35808 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:24:06.366457   35808 main.go:141] libmachine: Using API Version  1
	I0729 17:24:06.366478   35808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:24:06.366804   35808 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:24:06.367001   35808 main.go:141] libmachine: (ha-900414-m03) Calling .GetState
	I0729 17:24:06.368413   35808 status.go:330] ha-900414-m03 host status = "Running" (err=<nil>)
	I0729 17:24:06.368431   35808 host.go:66] Checking if "ha-900414-m03" exists ...
	I0729 17:24:06.368707   35808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:24:06.368741   35808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:24:06.383855   35808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37351
	I0729 17:24:06.384220   35808 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:24:06.384632   35808 main.go:141] libmachine: Using API Version  1
	I0729 17:24:06.384651   35808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:24:06.384940   35808 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:24:06.385128   35808 main.go:141] libmachine: (ha-900414-m03) Calling .GetIP
	I0729 17:24:06.387714   35808 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:24:06.388118   35808 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:24:06.388144   35808 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:24:06.388268   35808 host.go:66] Checking if "ha-900414-m03" exists ...
	I0729 17:24:06.388587   35808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:24:06.388623   35808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:24:06.403230   35808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42931
	I0729 17:24:06.403590   35808 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:24:06.404057   35808 main.go:141] libmachine: Using API Version  1
	I0729 17:24:06.404076   35808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:24:06.404377   35808 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:24:06.404555   35808 main.go:141] libmachine: (ha-900414-m03) Calling .DriverName
	I0729 17:24:06.404710   35808 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:24:06.404728   35808 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:24:06.407178   35808 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:24:06.407587   35808 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:24:06.407610   35808 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:24:06.407745   35808 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:24:06.407908   35808 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:24:06.408068   35808 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:24:06.408256   35808 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/id_rsa Username:docker}
	I0729 17:24:06.490563   35808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:24:06.505294   35808 kubeconfig.go:125] found "ha-900414" server: "https://192.168.39.254:8443"
	I0729 17:24:06.505332   35808 api_server.go:166] Checking apiserver status ...
	I0729 17:24:06.505386   35808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:24:06.519996   35808 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1524/cgroup
	W0729 17:24:06.530738   35808 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1524/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:24:06.530805   35808 ssh_runner.go:195] Run: ls
	I0729 17:24:06.535589   35808 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:24:06.539985   35808 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:24:06.540015   35808 status.go:422] ha-900414-m03 apiserver status = Running (err=<nil>)
	I0729 17:24:06.540027   35808 status.go:257] ha-900414-m03 status: &{Name:ha-900414-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:24:06.540043   35808 status.go:255] checking status of ha-900414-m04 ...
	I0729 17:24:06.540371   35808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:24:06.540410   35808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:24:06.556498   35808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41197
	I0729 17:24:06.556932   35808 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:24:06.557370   35808 main.go:141] libmachine: Using API Version  1
	I0729 17:24:06.557387   35808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:24:06.557723   35808 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:24:06.557951   35808 main.go:141] libmachine: (ha-900414-m04) Calling .GetState
	I0729 17:24:06.559530   35808 status.go:330] ha-900414-m04 host status = "Running" (err=<nil>)
	I0729 17:24:06.559545   35808 host.go:66] Checking if "ha-900414-m04" exists ...
	I0729 17:24:06.559828   35808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:24:06.559868   35808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:24:06.576187   35808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37057
	I0729 17:24:06.576602   35808 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:24:06.577109   35808 main.go:141] libmachine: Using API Version  1
	I0729 17:24:06.577135   35808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:24:06.577470   35808 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:24:06.577684   35808 main.go:141] libmachine: (ha-900414-m04) Calling .GetIP
	I0729 17:24:06.580451   35808 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:24:06.580936   35808 main.go:141] libmachine: (ha-900414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:eb:e5", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:19:51 +0000 UTC Type:0 Mac:52:54:00:a6:eb:e5 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-900414-m04 Clientid:01:52:54:00:a6:eb:e5}
	I0729 17:24:06.580951   35808 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined IP address 192.168.39.156 and MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:24:06.581140   35808 host.go:66] Checking if "ha-900414-m04" exists ...
	I0729 17:24:06.581452   35808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:24:06.581488   35808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:24:06.596550   35808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33875
	I0729 17:24:06.597014   35808 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:24:06.597485   35808 main.go:141] libmachine: Using API Version  1
	I0729 17:24:06.597518   35808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:24:06.597844   35808 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:24:06.598038   35808 main.go:141] libmachine: (ha-900414-m04) Calling .DriverName
	I0729 17:24:06.598215   35808 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:24:06.598235   35808 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHHostname
	I0729 17:24:06.601043   35808 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:24:06.601473   35808 main.go:141] libmachine: (ha-900414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:eb:e5", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:19:51 +0000 UTC Type:0 Mac:52:54:00:a6:eb:e5 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-900414-m04 Clientid:01:52:54:00:a6:eb:e5}
	I0729 17:24:06.601498   35808 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined IP address 192.168.39.156 and MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:24:06.601685   35808 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHPort
	I0729 17:24:06.601837   35808 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHKeyPath
	I0729 17:24:06.601947   35808 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHUsername
	I0729 17:24:06.602039   35808 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m04/id_rsa Username:docker}
	I0729 17:24:06.682714   35808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:24:06.699629   35808 status.go:257] ha-900414-m04 status: &{Name:ha-900414-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-900414 -n ha-900414
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-900414 logs -n 25: (1.368779056s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-900414 cp ha-900414-m03:/home/docker/cp-test.txt                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414:/home/docker/cp-test_ha-900414-m03_ha-900414.txt                       |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n ha-900414 sudo cat                                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | /home/docker/cp-test_ha-900414-m03_ha-900414.txt                                 |           |         |         |                     |                     |
	| cp      | ha-900414 cp ha-900414-m03:/home/docker/cp-test.txt                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m02:/home/docker/cp-test_ha-900414-m03_ha-900414-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n ha-900414-m02 sudo cat                                          | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | /home/docker/cp-test_ha-900414-m03_ha-900414-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-900414 cp ha-900414-m03:/home/docker/cp-test.txt                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04:/home/docker/cp-test_ha-900414-m03_ha-900414-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n ha-900414-m04 sudo cat                                          | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | /home/docker/cp-test_ha-900414-m03_ha-900414-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-900414 cp testdata/cp-test.txt                                                | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-900414 cp ha-900414-m04:/home/docker/cp-test.txt                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3654370545/001/cp-test_ha-900414-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-900414 cp ha-900414-m04:/home/docker/cp-test.txt                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414:/home/docker/cp-test_ha-900414-m04_ha-900414.txt                       |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n ha-900414 sudo cat                                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | /home/docker/cp-test_ha-900414-m04_ha-900414.txt                                 |           |         |         |                     |                     |
	| cp      | ha-900414 cp ha-900414-m04:/home/docker/cp-test.txt                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m02:/home/docker/cp-test_ha-900414-m04_ha-900414-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n ha-900414-m02 sudo cat                                          | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | /home/docker/cp-test_ha-900414-m04_ha-900414-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-900414 cp ha-900414-m04:/home/docker/cp-test.txt                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m03:/home/docker/cp-test_ha-900414-m04_ha-900414-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n ha-900414-m03 sudo cat                                          | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | /home/docker/cp-test_ha-900414-m04_ha-900414-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-900414 node stop m02 -v=7                                                     | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-900414 node start m02 -v=7                                                    | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:23 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 17:15:59
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 17:15:59.676568   29751 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:15:59.676958   29751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:15:59.676978   29751 out.go:304] Setting ErrFile to fd 2...
	I0729 17:15:59.676987   29751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:15:59.677510   29751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 17:15:59.678388   29751 out.go:298] Setting JSON to false
	I0729 17:15:59.679421   29751 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3512,"bootTime":1722269848,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 17:15:59.679474   29751 start.go:139] virtualization: kvm guest
	I0729 17:15:59.681222   29751 out.go:177] * [ha-900414] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 17:15:59.682710   29751 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 17:15:59.682718   29751 notify.go:220] Checking for updates...
	I0729 17:15:59.684026   29751 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:15:59.685288   29751 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 17:15:59.686510   29751 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 17:15:59.687630   29751 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 17:15:59.688655   29751 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:15:59.689882   29751 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:15:59.724621   29751 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 17:15:59.725696   29751 start.go:297] selected driver: kvm2
	I0729 17:15:59.725706   29751 start.go:901] validating driver "kvm2" against <nil>
	I0729 17:15:59.725715   29751 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:15:59.726404   29751 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:15:59.726470   29751 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19345-11206/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 17:15:59.741438   29751 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 17:15:59.741474   29751 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 17:15:59.741694   29751 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:15:59.741750   29751 cni.go:84] Creating CNI manager for ""
	I0729 17:15:59.741761   29751 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 17:15:59.741767   29751 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 17:15:59.741821   29751 start.go:340] cluster config:
	{Name:ha-900414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-900414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:15:59.741914   29751 iso.go:125] acquiring lock: {Name:mke302f851ce8256f9b44dd080ed38df68285cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:15:59.743591   29751 out.go:177] * Starting "ha-900414" primary control-plane node in "ha-900414" cluster
	I0729 17:15:59.744900   29751 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:15:59.744955   29751 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 17:15:59.744968   29751 cache.go:56] Caching tarball of preloaded images
	I0729 17:15:59.745055   29751 preload.go:172] Found /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 17:15:59.745068   29751 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 17:15:59.745332   29751 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/config.json ...
	I0729 17:15:59.745352   29751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/config.json: {Name:mk6b9bd4ecd2940fba0f12ae60de6d6e9b718e49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:15:59.745485   29751 start.go:360] acquireMachinesLock for ha-900414: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:15:59.745525   29751 start.go:364] duration metric: took 28.47µs to acquireMachinesLock for "ha-900414"
	I0729 17:15:59.745543   29751 start.go:93] Provisioning new machine with config: &{Name:ha-900414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-900414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:15:59.745597   29751 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 17:15:59.747636   29751 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 17:15:59.747748   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:15:59.747779   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:15:59.762097   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42391
	I0729 17:15:59.762484   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:15:59.762945   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:15:59.762965   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:15:59.763285   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:15:59.763435   29751 main.go:141] libmachine: (ha-900414) Calling .GetMachineName
	I0729 17:15:59.763582   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:15:59.763718   29751 start.go:159] libmachine.API.Create for "ha-900414" (driver="kvm2")
	I0729 17:15:59.763740   29751 client.go:168] LocalClient.Create starting
	I0729 17:15:59.763769   29751 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem
	I0729 17:15:59.763803   29751 main.go:141] libmachine: Decoding PEM data...
	I0729 17:15:59.763818   29751 main.go:141] libmachine: Parsing certificate...
	I0729 17:15:59.763871   29751 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem
	I0729 17:15:59.763889   29751 main.go:141] libmachine: Decoding PEM data...
	I0729 17:15:59.763908   29751 main.go:141] libmachine: Parsing certificate...
	I0729 17:15:59.763931   29751 main.go:141] libmachine: Running pre-create checks...
	I0729 17:15:59.763939   29751 main.go:141] libmachine: (ha-900414) Calling .PreCreateCheck
	I0729 17:15:59.764279   29751 main.go:141] libmachine: (ha-900414) Calling .GetConfigRaw
	I0729 17:15:59.764582   29751 main.go:141] libmachine: Creating machine...
	I0729 17:15:59.764593   29751 main.go:141] libmachine: (ha-900414) Calling .Create
	I0729 17:15:59.764698   29751 main.go:141] libmachine: (ha-900414) Creating KVM machine...
	I0729 17:15:59.765861   29751 main.go:141] libmachine: (ha-900414) DBG | found existing default KVM network
	I0729 17:15:59.766644   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:15:59.766505   29790 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d980}
	I0729 17:15:59.766664   29751 main.go:141] libmachine: (ha-900414) DBG | created network xml: 
	I0729 17:15:59.766678   29751 main.go:141] libmachine: (ha-900414) DBG | <network>
	I0729 17:15:59.766693   29751 main.go:141] libmachine: (ha-900414) DBG |   <name>mk-ha-900414</name>
	I0729 17:15:59.766704   29751 main.go:141] libmachine: (ha-900414) DBG |   <dns enable='no'/>
	I0729 17:15:59.766714   29751 main.go:141] libmachine: (ha-900414) DBG |   
	I0729 17:15:59.766726   29751 main.go:141] libmachine: (ha-900414) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 17:15:59.766736   29751 main.go:141] libmachine: (ha-900414) DBG |     <dhcp>
	I0729 17:15:59.766760   29751 main.go:141] libmachine: (ha-900414) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 17:15:59.766776   29751 main.go:141] libmachine: (ha-900414) DBG |     </dhcp>
	I0729 17:15:59.766786   29751 main.go:141] libmachine: (ha-900414) DBG |   </ip>
	I0729 17:15:59.766800   29751 main.go:141] libmachine: (ha-900414) DBG |   
	I0729 17:15:59.766812   29751 main.go:141] libmachine: (ha-900414) DBG | </network>
	I0729 17:15:59.766821   29751 main.go:141] libmachine: (ha-900414) DBG | 
	I0729 17:15:59.771617   29751 main.go:141] libmachine: (ha-900414) DBG | trying to create private KVM network mk-ha-900414 192.168.39.0/24...
	I0729 17:15:59.836965   29751 main.go:141] libmachine: (ha-900414) DBG | private KVM network mk-ha-900414 192.168.39.0/24 created
	I0729 17:15:59.836997   29751 main.go:141] libmachine: (ha-900414) Setting up store path in /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414 ...
	I0729 17:15:59.837010   29751 main.go:141] libmachine: (ha-900414) Building disk image from file:///home/jenkins/minikube-integration/19345-11206/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 17:15:59.837021   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:15:59.836933   29790 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 17:15:59.837167   29751 main.go:141] libmachine: (ha-900414) Downloading /home/jenkins/minikube-integration/19345-11206/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19345-11206/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 17:16:00.074746   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:00.074622   29790 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa...
	I0729 17:16:00.313510   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:00.313359   29790 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/ha-900414.rawdisk...
	I0729 17:16:00.313549   29751 main.go:141] libmachine: (ha-900414) DBG | Writing magic tar header
	I0729 17:16:00.313564   29751 main.go:141] libmachine: (ha-900414) DBG | Writing SSH key tar header
	I0729 17:16:00.313577   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:00.313507   29790 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414 ...
	I0729 17:16:00.313661   29751 main.go:141] libmachine: (ha-900414) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414
	I0729 17:16:00.313679   29751 main.go:141] libmachine: (ha-900414) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube/machines
	I0729 17:16:00.313690   29751 main.go:141] libmachine: (ha-900414) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414 (perms=drwx------)
	I0729 17:16:00.313699   29751 main.go:141] libmachine: (ha-900414) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube/machines (perms=drwxr-xr-x)
	I0729 17:16:00.313705   29751 main.go:141] libmachine: (ha-900414) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube (perms=drwxr-xr-x)
	I0729 17:16:00.313712   29751 main.go:141] libmachine: (ha-900414) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 17:16:00.313722   29751 main.go:141] libmachine: (ha-900414) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206
	I0729 17:16:00.313728   29751 main.go:141] libmachine: (ha-900414) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 17:16:00.313735   29751 main.go:141] libmachine: (ha-900414) DBG | Checking permissions on dir: /home/jenkins
	I0729 17:16:00.313742   29751 main.go:141] libmachine: (ha-900414) DBG | Checking permissions on dir: /home
	I0729 17:16:00.313750   29751 main.go:141] libmachine: (ha-900414) DBG | Skipping /home - not owner
	I0729 17:16:00.313760   29751 main.go:141] libmachine: (ha-900414) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206 (perms=drwxrwxr-x)
	I0729 17:16:00.313797   29751 main.go:141] libmachine: (ha-900414) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 17:16:00.313816   29751 main.go:141] libmachine: (ha-900414) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 17:16:00.313838   29751 main.go:141] libmachine: (ha-900414) Creating domain...
	I0729 17:16:00.314925   29751 main.go:141] libmachine: (ha-900414) define libvirt domain using xml: 
	I0729 17:16:00.314946   29751 main.go:141] libmachine: (ha-900414) <domain type='kvm'>
	I0729 17:16:00.314956   29751 main.go:141] libmachine: (ha-900414)   <name>ha-900414</name>
	I0729 17:16:00.314968   29751 main.go:141] libmachine: (ha-900414)   <memory unit='MiB'>2200</memory>
	I0729 17:16:00.314979   29751 main.go:141] libmachine: (ha-900414)   <vcpu>2</vcpu>
	I0729 17:16:00.314984   29751 main.go:141] libmachine: (ha-900414)   <features>
	I0729 17:16:00.314989   29751 main.go:141] libmachine: (ha-900414)     <acpi/>
	I0729 17:16:00.314993   29751 main.go:141] libmachine: (ha-900414)     <apic/>
	I0729 17:16:00.315020   29751 main.go:141] libmachine: (ha-900414)     <pae/>
	I0729 17:16:00.315048   29751 main.go:141] libmachine: (ha-900414)     
	I0729 17:16:00.315058   29751 main.go:141] libmachine: (ha-900414)   </features>
	I0729 17:16:00.315063   29751 main.go:141] libmachine: (ha-900414)   <cpu mode='host-passthrough'>
	I0729 17:16:00.315068   29751 main.go:141] libmachine: (ha-900414)   
	I0729 17:16:00.315071   29751 main.go:141] libmachine: (ha-900414)   </cpu>
	I0729 17:16:00.315076   29751 main.go:141] libmachine: (ha-900414)   <os>
	I0729 17:16:00.315081   29751 main.go:141] libmachine: (ha-900414)     <type>hvm</type>
	I0729 17:16:00.315086   29751 main.go:141] libmachine: (ha-900414)     <boot dev='cdrom'/>
	I0729 17:16:00.315092   29751 main.go:141] libmachine: (ha-900414)     <boot dev='hd'/>
	I0729 17:16:00.315097   29751 main.go:141] libmachine: (ha-900414)     <bootmenu enable='no'/>
	I0729 17:16:00.315104   29751 main.go:141] libmachine: (ha-900414)   </os>
	I0729 17:16:00.315109   29751 main.go:141] libmachine: (ha-900414)   <devices>
	I0729 17:16:00.315116   29751 main.go:141] libmachine: (ha-900414)     <disk type='file' device='cdrom'>
	I0729 17:16:00.315123   29751 main.go:141] libmachine: (ha-900414)       <source file='/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/boot2docker.iso'/>
	I0729 17:16:00.315134   29751 main.go:141] libmachine: (ha-900414)       <target dev='hdc' bus='scsi'/>
	I0729 17:16:00.315157   29751 main.go:141] libmachine: (ha-900414)       <readonly/>
	I0729 17:16:00.315176   29751 main.go:141] libmachine: (ha-900414)     </disk>
	I0729 17:16:00.315189   29751 main.go:141] libmachine: (ha-900414)     <disk type='file' device='disk'>
	I0729 17:16:00.315201   29751 main.go:141] libmachine: (ha-900414)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 17:16:00.315218   29751 main.go:141] libmachine: (ha-900414)       <source file='/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/ha-900414.rawdisk'/>
	I0729 17:16:00.315230   29751 main.go:141] libmachine: (ha-900414)       <target dev='hda' bus='virtio'/>
	I0729 17:16:00.315241   29751 main.go:141] libmachine: (ha-900414)     </disk>
	I0729 17:16:00.315254   29751 main.go:141] libmachine: (ha-900414)     <interface type='network'>
	I0729 17:16:00.315268   29751 main.go:141] libmachine: (ha-900414)       <source network='mk-ha-900414'/>
	I0729 17:16:00.315279   29751 main.go:141] libmachine: (ha-900414)       <model type='virtio'/>
	I0729 17:16:00.315288   29751 main.go:141] libmachine: (ha-900414)     </interface>
	I0729 17:16:00.315299   29751 main.go:141] libmachine: (ha-900414)     <interface type='network'>
	I0729 17:16:00.315308   29751 main.go:141] libmachine: (ha-900414)       <source network='default'/>
	I0729 17:16:00.315318   29751 main.go:141] libmachine: (ha-900414)       <model type='virtio'/>
	I0729 17:16:00.315329   29751 main.go:141] libmachine: (ha-900414)     </interface>
	I0729 17:16:00.315345   29751 main.go:141] libmachine: (ha-900414)     <serial type='pty'>
	I0729 17:16:00.315358   29751 main.go:141] libmachine: (ha-900414)       <target port='0'/>
	I0729 17:16:00.315369   29751 main.go:141] libmachine: (ha-900414)     </serial>
	I0729 17:16:00.315380   29751 main.go:141] libmachine: (ha-900414)     <console type='pty'>
	I0729 17:16:00.315391   29751 main.go:141] libmachine: (ha-900414)       <target type='serial' port='0'/>
	I0729 17:16:00.315415   29751 main.go:141] libmachine: (ha-900414)     </console>
	I0729 17:16:00.315429   29751 main.go:141] libmachine: (ha-900414)     <rng model='virtio'>
	I0729 17:16:00.315443   29751 main.go:141] libmachine: (ha-900414)       <backend model='random'>/dev/random</backend>
	I0729 17:16:00.315453   29751 main.go:141] libmachine: (ha-900414)     </rng>
	I0729 17:16:00.315461   29751 main.go:141] libmachine: (ha-900414)     
	I0729 17:16:00.315470   29751 main.go:141] libmachine: (ha-900414)     
	I0729 17:16:00.315478   29751 main.go:141] libmachine: (ha-900414)   </devices>
	I0729 17:16:00.315487   29751 main.go:141] libmachine: (ha-900414) </domain>
	I0729 17:16:00.315496   29751 main.go:141] libmachine: (ha-900414) 
	I0729 17:16:00.319670   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:06:36:96 in network default
	I0729 17:16:00.320139   29751 main.go:141] libmachine: (ha-900414) Ensuring networks are active...
	I0729 17:16:00.320154   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:00.320747   29751 main.go:141] libmachine: (ha-900414) Ensuring network default is active
	I0729 17:16:00.320974   29751 main.go:141] libmachine: (ha-900414) Ensuring network mk-ha-900414 is active
	I0729 17:16:00.321597   29751 main.go:141] libmachine: (ha-900414) Getting domain xml...
	I0729 17:16:00.322398   29751 main.go:141] libmachine: (ha-900414) Creating domain...
	I0729 17:16:01.503985   29751 main.go:141] libmachine: (ha-900414) Waiting to get IP...
	I0729 17:16:01.504837   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:01.505292   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:01.505334   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:01.505277   29790 retry.go:31] will retry after 223.456895ms: waiting for machine to come up
	I0729 17:16:01.730850   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:01.731334   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:01.731360   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:01.731285   29790 retry.go:31] will retry after 358.601967ms: waiting for machine to come up
	I0729 17:16:02.092010   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:02.092531   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:02.092557   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:02.092480   29790 retry.go:31] will retry after 326.470702ms: waiting for machine to come up
	I0729 17:16:02.420941   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:02.421342   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:02.421367   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:02.421293   29790 retry.go:31] will retry after 592.274293ms: waiting for machine to come up
	I0729 17:16:03.014934   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:03.015310   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:03.015334   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:03.015269   29790 retry.go:31] will retry after 565.688093ms: waiting for machine to come up
	I0729 17:16:03.583027   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:03.583564   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:03.583589   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:03.583528   29790 retry.go:31] will retry after 638.104329ms: waiting for machine to come up
	I0729 17:16:04.223289   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:04.223682   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:04.223720   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:04.223653   29790 retry.go:31] will retry after 945.413379ms: waiting for machine to come up
	I0729 17:16:05.170448   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:05.170854   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:05.170879   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:05.170791   29790 retry.go:31] will retry after 1.059633806s: waiting for machine to come up
	I0729 17:16:06.232013   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:06.232499   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:06.232527   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:06.232449   29790 retry.go:31] will retry after 1.16821857s: waiting for machine to come up
	I0729 17:16:07.402715   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:07.403242   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:07.403271   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:07.403184   29790 retry.go:31] will retry after 1.541797905s: waiting for machine to come up
	I0729 17:16:08.947064   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:08.947472   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:08.947493   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:08.947452   29790 retry.go:31] will retry after 2.188109829s: waiting for machine to come up
	I0729 17:16:11.137679   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:11.138142   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:11.138169   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:11.138086   29790 retry.go:31] will retry after 3.499780988s: waiting for machine to come up
	I0729 17:16:14.641759   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:14.642210   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:14.642231   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:14.642166   29790 retry.go:31] will retry after 4.332731547s: waiting for machine to come up
	I0729 17:16:18.980304   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:18.980832   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find current IP address of domain ha-900414 in network mk-ha-900414
	I0729 17:16:18.980864   29751 main.go:141] libmachine: (ha-900414) DBG | I0729 17:16:18.980767   29790 retry.go:31] will retry after 5.360938119s: waiting for machine to come up
	I0729 17:16:24.343363   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:24.343835   29751 main.go:141] libmachine: (ha-900414) Found IP for machine: 192.168.39.114
	I0729 17:16:24.343874   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has current primary IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:24.343888   29751 main.go:141] libmachine: (ha-900414) Reserving static IP address...
	I0729 17:16:24.344201   29751 main.go:141] libmachine: (ha-900414) DBG | unable to find host DHCP lease matching {name: "ha-900414", mac: "52:54:00:5a:29:8d", ip: "192.168.39.114"} in network mk-ha-900414
	I0729 17:16:24.414982   29751 main.go:141] libmachine: (ha-900414) DBG | Getting to WaitForSSH function...
	I0729 17:16:24.415004   29751 main.go:141] libmachine: (ha-900414) Reserved static IP address: 192.168.39.114
	I0729 17:16:24.415019   29751 main.go:141] libmachine: (ha-900414) Waiting for SSH to be available...
	I0729 17:16:24.417039   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:24.417427   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:24.417455   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:24.417543   29751 main.go:141] libmachine: (ha-900414) DBG | Using SSH client type: external
	I0729 17:16:24.417595   29751 main.go:141] libmachine: (ha-900414) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa (-rw-------)
	I0729 17:16:24.417626   29751 main.go:141] libmachine: (ha-900414) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 17:16:24.417645   29751 main.go:141] libmachine: (ha-900414) DBG | About to run SSH command:
	I0729 17:16:24.417662   29751 main.go:141] libmachine: (ha-900414) DBG | exit 0
	I0729 17:16:24.542583   29751 main.go:141] libmachine: (ha-900414) DBG | SSH cmd err, output: <nil>: 
	I0729 17:16:24.542918   29751 main.go:141] libmachine: (ha-900414) KVM machine creation complete!
	I0729 17:16:24.543406   29751 main.go:141] libmachine: (ha-900414) Calling .GetConfigRaw
	I0729 17:16:24.543927   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:16:24.544157   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:16:24.544367   29751 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 17:16:24.544384   29751 main.go:141] libmachine: (ha-900414) Calling .GetState
	I0729 17:16:24.545826   29751 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 17:16:24.545841   29751 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 17:16:24.545848   29751 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 17:16:24.545858   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:16:24.548387   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:24.548744   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:24.548768   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:24.548886   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:16:24.549058   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:24.549180   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:24.549292   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:16:24.549415   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:16:24.549590   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0729 17:16:24.549602   29751 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 17:16:24.653629   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:16:24.653650   29751 main.go:141] libmachine: Detecting the provisioner...
	I0729 17:16:24.653657   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:16:24.656346   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:24.656670   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:24.656706   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:24.656830   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:16:24.657006   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:24.657165   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:24.657322   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:16:24.657470   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:16:24.657670   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0729 17:16:24.657682   29751 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 17:16:24.763340   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 17:16:24.763408   29751 main.go:141] libmachine: found compatible host: buildroot
	I0729 17:16:24.763416   29751 main.go:141] libmachine: Provisioning with buildroot...
	I0729 17:16:24.763423   29751 main.go:141] libmachine: (ha-900414) Calling .GetMachineName
	I0729 17:16:24.763667   29751 buildroot.go:166] provisioning hostname "ha-900414"
	I0729 17:16:24.763693   29751 main.go:141] libmachine: (ha-900414) Calling .GetMachineName
	I0729 17:16:24.763895   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:16:24.766542   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:24.766942   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:24.766967   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:24.767150   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:16:24.767284   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:24.767472   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:24.767680   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:16:24.767869   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:16:24.768029   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0729 17:16:24.768041   29751 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-900414 && echo "ha-900414" | sudo tee /etc/hostname
	I0729 17:16:24.888774   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-900414
	
	I0729 17:16:24.888799   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:16:24.891638   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:24.892040   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:24.892070   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:24.892197   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:16:24.892383   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:24.892543   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:24.892676   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:16:24.892839   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:16:24.893044   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0729 17:16:24.893066   29751 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-900414' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-900414/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-900414' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 17:16:25.007667   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
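The hostname step above reduces to two idempotent shell operations: set the guest's hostname and make sure /etc/hosts carries a 127.0.1.1 entry for it. A minimal equivalent sketch, run as root on the guest (the ha-900414 name is taken from this log):

	hostname ha-900414 && echo "ha-900414" > /etc/hostname
	# only append a loopback mapping if no entry for the hostname exists yet
	grep -q '\sha-900414' /etc/hosts || echo '127.0.1.1 ha-900414' >> /etc/hosts
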
	I0729 17:16:25.007698   29751 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 17:16:25.007739   29751 buildroot.go:174] setting up certificates
	I0729 17:16:25.007751   29751 provision.go:84] configureAuth start
	I0729 17:16:25.007761   29751 main.go:141] libmachine: (ha-900414) Calling .GetMachineName
	I0729 17:16:25.008042   29751 main.go:141] libmachine: (ha-900414) Calling .GetIP
	I0729 17:16:25.010704   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.011044   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:25.011078   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.011192   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:16:25.013536   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.013812   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:25.013836   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.013995   29751 provision.go:143] copyHostCerts
	I0729 17:16:25.014024   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 17:16:25.014058   29751 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 17:16:25.014068   29751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 17:16:25.014130   29751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 17:16:25.014217   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 17:16:25.014235   29751 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 17:16:25.014239   29751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 17:16:25.014263   29751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 17:16:25.014316   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 17:16:25.014333   29751 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 17:16:25.014339   29751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 17:16:25.014374   29751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 17:16:25.014445   29751 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.ha-900414 san=[127.0.0.1 192.168.39.114 ha-900414 localhost minikube]
	I0729 17:16:25.088399   29751 provision.go:177] copyRemoteCerts
	I0729 17:16:25.088468   29751 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 17:16:25.088495   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:16:25.091613   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.091999   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:25.092027   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.092220   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:16:25.092394   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:25.092608   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:16:25.092748   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:16:25.176099   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 17:16:25.176191   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 17:16:25.200204   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 17:16:25.200283   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 17:16:25.223234   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 17:16:25.223304   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0729 17:16:25.246533   29751 provision.go:87] duration metric: took 238.768709ms to configureAuth
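configureAuth copies the CA material out of the host's .minikube tree and issues a per-machine server certificate whose SANs cover the VM IP and the usual local names, then pushes the results to /etc/docker on the guest. minikube does the issuance in Go; an approximate openssl sketch of the same step, with illustrative validity and paths mirroring the log, would be:

	cd /home/jenkins/minikube-integration/19345-11206/.minikube
	# key + CSR for the machine, org name as logged
	openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.ha-900414" \
	  -keyout machines/server-key.pem -out machines/server.csr
	# sign with the minikube CA, SANs as listed in the provision.go line above
	openssl x509 -req -in machines/server.csr -CA certs/ca.pem -CAkey certs/ca-key.pem \
	  -CAcreateserial -days 365 -out machines/server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.114,DNS:ha-900414,DNS:localhost,DNS:minikube')
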
	I0729 17:16:25.246560   29751 buildroot.go:189] setting minikube options for container-runtime
	I0729 17:16:25.246752   29751 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:16:25.246830   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:16:25.249458   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.249805   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:25.249822   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.249988   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:16:25.250165   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:25.250342   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:25.250491   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:16:25.250643   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:16:25.250843   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0729 17:16:25.250874   29751 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 17:16:25.519886   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
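The options file written here feeds extra flags to CRI-O; the guest image's crio unit is assumed to source /etc/sysconfig/crio.minikube, so the --insecure-registry entry for the service CIDR takes effect on the restart. Quick checks on the guest (sketch):

	cat /etc/sysconfig/crio.minikube
	systemctl cat crio | grep -i environmentfile   # confirms where the options file is wired in (ISO unit layout assumed)
	systemctl is-active crio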
	
	I0729 17:16:25.519916   29751 main.go:141] libmachine: Checking connection to Docker...
	I0729 17:16:25.519925   29751 main.go:141] libmachine: (ha-900414) Calling .GetURL
	I0729 17:16:25.521139   29751 main.go:141] libmachine: (ha-900414) DBG | Using libvirt version 6000000
	I0729 17:16:25.523401   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.523788   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:25.523814   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.524023   29751 main.go:141] libmachine: Docker is up and running!
	I0729 17:16:25.524040   29751 main.go:141] libmachine: Reticulating splines...
	I0729 17:16:25.524047   29751 client.go:171] duration metric: took 25.760297654s to LocalClient.Create
	I0729 17:16:25.524069   29751 start.go:167] duration metric: took 25.760350985s to libmachine.API.Create "ha-900414"
	I0729 17:16:25.524077   29751 start.go:293] postStartSetup for "ha-900414" (driver="kvm2")
	I0729 17:16:25.524086   29751 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 17:16:25.524100   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:16:25.524350   29751 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 17:16:25.524370   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:16:25.526667   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.526989   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:25.527013   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.527208   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:16:25.527371   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:25.527499   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:16:25.527638   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:16:25.608806   29751 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 17:16:25.613178   29751 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 17:16:25.613197   29751 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 17:16:25.613251   29751 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 17:16:25.613340   29751 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 17:16:25.613355   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> /etc/ssl/certs/183932.pem
	I0729 17:16:25.613474   29751 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 17:16:25.622665   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 17:16:25.646959   29751 start.go:296] duration metric: took 122.870417ms for postStartSetup
	I0729 17:16:25.647002   29751 main.go:141] libmachine: (ha-900414) Calling .GetConfigRaw
	I0729 17:16:25.647614   29751 main.go:141] libmachine: (ha-900414) Calling .GetIP
	I0729 17:16:25.650408   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.650713   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:25.650735   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.650966   29751 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/config.json ...
	I0729 17:16:25.651158   29751 start.go:128] duration metric: took 25.90555269s to createHost
	I0729 17:16:25.651180   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:16:25.653612   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.653961   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:25.653982   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.654123   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:16:25.654303   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:25.654488   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:25.654626   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:16:25.654780   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:16:25.654955   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0729 17:16:25.654975   29751 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 17:16:25.763249   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722273385.741222344
	
	I0729 17:16:25.763271   29751 fix.go:216] guest clock: 1722273385.741222344
	I0729 17:16:25.763286   29751 fix.go:229] Guest: 2024-07-29 17:16:25.741222344 +0000 UTC Remote: 2024-07-29 17:16:25.651169706 +0000 UTC m=+26.007429590 (delta=90.052638ms)
	I0729 17:16:25.763306   29751 fix.go:200] guest clock delta is within tolerance: 90.052638ms
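The fix.go lines read the guest clock over SSH (date +%s.%N), compare it with the host's wall clock at the moment of the call, and only force a resync when the delta exceeds a tolerance; here the ~90ms difference is accepted. The same check in shell, with an assumed 2-second threshold for illustration (SSH key options omitted):

	guest=$(ssh docker@192.168.39.114 'date +%s.%N')
	host=$(date +%s.%N)
	delta=$(echo "$guest - $host" | bc)
	awk -v d="$delta" 'BEGIN { exit (d > 2 || d < -2) }' && echo "guest clock within tolerance (${delta}s)"
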
	I0729 17:16:25.763311   29751 start.go:83] releasing machines lock for "ha-900414", held for 26.01777943s
	I0729 17:16:25.763328   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:16:25.763585   29751 main.go:141] libmachine: (ha-900414) Calling .GetIP
	I0729 17:16:25.766107   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.766581   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:25.766609   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.766733   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:16:25.767155   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:16:25.767309   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:16:25.767396   29751 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 17:16:25.767429   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:16:25.767498   29751 ssh_runner.go:195] Run: cat /version.json
	I0729 17:16:25.767514   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:16:25.770326   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.770535   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.770764   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:25.770790   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.770973   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:16:25.770985   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:25.771011   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:25.771170   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:16:25.771193   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:25.771292   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:25.771355   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:16:25.771422   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:16:25.771483   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:16:25.771571   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:16:25.884230   29751 ssh_runner.go:195] Run: systemctl --version
	I0729 17:16:25.890429   29751 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 17:16:26.046533   29751 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 17:16:26.052249   29751 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 17:16:26.052301   29751 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 17:16:26.069130   29751 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 17:16:26.069147   29751 start.go:495] detecting cgroup driver to use...
	I0729 17:16:26.069208   29751 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 17:16:26.086635   29751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 17:16:26.100857   29751 docker.go:217] disabling cri-docker service (if available) ...
	I0729 17:16:26.100909   29751 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 17:16:26.114412   29751 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 17:16:26.131217   29751 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 17:16:26.260546   29751 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 17:16:26.409176   29751 docker.go:233] disabling docker service ...
	I0729 17:16:26.409245   29751 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 17:16:26.423523   29751 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 17:16:26.436099   29751 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 17:16:26.577524   29751 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 17:16:26.703925   29751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 17:16:26.717445   29751 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 17:16:26.735004   29751 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 17:16:26.735048   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:16:26.745757   29751 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 17:16:26.745827   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:16:26.756432   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:16:26.766881   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:16:26.777521   29751 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 17:16:26.788302   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:16:26.799436   29751 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:16:26.819106   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:16:26.829194   29751 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 17:16:26.838407   29751 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 17:16:26.838466   29751 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 17:16:26.851462   29751 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 17:16:26.861215   29751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:16:26.985901   29751 ssh_runner.go:195] Run: sudo systemctl restart crio
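After the sed passes above, /etc/crio/crio.conf.d/02-crio.conf should carry the pause image, the cgroupfs driver, the conmon cgroup and the unprivileged-port sysctl before crio comes back up. A quick way to confirm the result on the guest (the expected values are reconstructed from the edits in this log, not captured verbatim):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
	sudo sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables   # both should read 1 once br_netfilter is loaded
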
	I0729 17:16:27.125514   29751 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 17:16:27.125590   29751 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 17:16:27.130374   29751 start.go:563] Will wait 60s for crictl version
	I0729 17:16:27.130422   29751 ssh_runner.go:195] Run: which crictl
	I0729 17:16:27.134213   29751 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 17:16:27.172216   29751 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 17:16:27.172305   29751 ssh_runner.go:195] Run: crio --version
	I0729 17:16:27.199795   29751 ssh_runner.go:195] Run: crio --version
	I0729 17:16:27.229912   29751 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 17:16:27.231310   29751 main.go:141] libmachine: (ha-900414) Calling .GetIP
	I0729 17:16:27.234180   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:27.234609   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:27.234642   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:27.234789   29751 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 17:16:27.239065   29751 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 17:16:27.252230   29751 kubeadm.go:883] updating cluster {Name:ha-900414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-900414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 17:16:27.252330   29751 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:16:27.252386   29751 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 17:16:27.284998   29751 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 17:16:27.285145   29751 ssh_runner.go:195] Run: which lz4
	I0729 17:16:27.289201   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0729 17:16:27.289299   29751 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 17:16:27.293655   29751 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 17:16:27.293681   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 17:16:28.663963   29751 crio.go:462] duration metric: took 1.374697458s to copy over tarball
	I0729 17:16:28.664026   29751 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 17:16:30.851721   29751 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.187668412s)
	I0729 17:16:30.851741   29751 crio.go:469] duration metric: took 2.18775491s to extract the tarball
	I0729 17:16:30.851748   29751 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 17:16:30.889486   29751 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 17:16:30.935348   29751 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 17:16:30.935372   29751 cache_images.go:84] Images are preloaded, skipping loading
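Whether the preload is usable is decided purely by what crictl reports: the first query (before the tarball was unpacked under /var) found no kube-apiserver image, the second finds everything. The same check and the manual equivalent of the extraction step, roughly:

	sudo crictl images --output json | grep -c 'registry.k8s.io/kube-apiserver'   # 0 before extraction, >=1 after
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
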
	I0729 17:16:30.935381   29751 kubeadm.go:934] updating node { 192.168.39.114 8443 v1.30.3 crio true true} ...
	I0729 17:16:30.935517   29751 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-900414 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-900414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 17:16:30.935601   29751 ssh_runner.go:195] Run: crio config
	I0729 17:16:30.979532   29751 cni.go:84] Creating CNI manager for ""
	I0729 17:16:30.979553   29751 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 17:16:30.979563   29751 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 17:16:30.979581   29751 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.114 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-900414 NodeName:ha-900414 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 17:16:30.979732   29751 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-900414"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
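The rendered config is copied to the guest later as /var/tmp/minikube/kubeadm.yaml.new (see the scp below) and eventually handed to kubeadm. As a hedged sanity check one could dry-run it on the guest with the bundled binary; the flag set here is illustrative, not the exact invocation minikube uses:

	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run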
	
	I0729 17:16:30.979759   29751 kube-vip.go:115] generating kube-vip config ...
	I0729 17:16:30.979803   29751 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 17:16:30.998345   29751 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 17:16:30.998464   29751 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
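This static pod is what serves the APIServerHAVIP 192.168.39.254: kubelet loads the manifest from /etc/kubernetes/manifests (it is scp'd there as kube-vip.yaml a few lines down), and once leader election settles the VIP is added to eth0 on the winning control-plane node. Quick checks on the guest (sketch; the healthz call may return 401 without credentials):

	ls /etc/kubernetes/manifests/kube-vip.yaml
	ip addr show dev eth0 | grep 192.168.39.254   # VIP present on the elected leader only
	curl -k https://192.168.39.254:8443/healthz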
	I0729 17:16:30.998526   29751 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 17:16:31.009025   29751 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 17:16:31.009094   29751 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 17:16:31.019681   29751 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 17:16:31.036876   29751 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 17:16:31.054074   29751 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 17:16:31.070322   29751 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0729 17:16:31.086267   29751 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 17:16:31.089926   29751 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 17:16:31.102733   29751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:16:31.225836   29751 ssh_runner.go:195] Run: sudo systemctl start kubelet
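At this point the kubelet unit, its 10-kubeadm.conf drop-in and the kube-vip manifest are in place and kubelet has been started; it typically crash-loops until kubeadm init writes /var/lib/kubelet/config.yaml and the bootstrap kubeconfig. To see what it picked up (sketch):

	systemctl cat kubelet                    # unit plus the 10-kubeadm.conf drop-in
	journalctl -u kubelet --no-pager -n 20   # expect errors about the missing config/kubeconfig until kubeadm init runs
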
	I0729 17:16:31.242958   29751 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414 for IP: 192.168.39.114
	I0729 17:16:31.242977   29751 certs.go:194] generating shared ca certs ...
	I0729 17:16:31.242991   29751 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:16:31.243144   29751 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 17:16:31.243191   29751 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 17:16:31.243200   29751 certs.go:256] generating profile certs ...
	I0729 17:16:31.243259   29751 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.key
	I0729 17:16:31.243273   29751 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.crt with IP's: []
	I0729 17:16:31.374501   29751 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.crt ...
	I0729 17:16:31.374531   29751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.crt: {Name:mkb7b43c2afb7f6dbf658b43148a8f3bb44cbc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:16:31.374700   29751 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.key ...
	I0729 17:16:31.374709   29751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.key: {Name:mkb05bbb91e12e97873bf109d01e2f6483e49b7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:16:31.374785   29751 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.b5031bbd
	I0729 17:16:31.374800   29751 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.b5031bbd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114 192.168.39.254]
	I0729 17:16:31.695954   29751 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.b5031bbd ...
	I0729 17:16:31.695982   29751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.b5031bbd: {Name:mkbb6153a90029f4010f08b3c029806b5b14b049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:16:31.696158   29751 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.b5031bbd ...
	I0729 17:16:31.696172   29751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.b5031bbd: {Name:mk445d3afe4dca68bf414d39ecebb58f1ab9a59c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:16:31.696266   29751 certs.go:381] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.b5031bbd -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt
	I0729 17:16:31.696364   29751 certs.go:385] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.b5031bbd -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key
	I0729 17:16:31.696440   29751 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key
	I0729 17:16:31.696460   29751 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.crt with IP's: []
	I0729 17:16:31.758432   29751 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.crt ...
	I0729 17:16:31.758456   29751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.crt: {Name:mkaefbda7a5c157d6370f92a63212228c1be898d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:16:31.758609   29751 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key ...
	I0729 17:16:31.758621   29751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key: {Name:mk83611fe6757acef0f970b5a2af1c987798c2d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:16:31.758707   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 17:16:31.758724   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 17:16:31.758738   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 17:16:31.758758   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 17:16:31.758776   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 17:16:31.758791   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 17:16:31.758804   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 17:16:31.758817   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 17:16:31.758888   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 17:16:31.758930   29751 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 17:16:31.758944   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 17:16:31.758975   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 17:16:31.759004   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 17:16:31.759035   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 17:16:31.759086   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 17:16:31.759140   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:16:31.759176   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem -> /usr/share/ca-certificates/18393.pem
	I0729 17:16:31.759196   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> /usr/share/ca-certificates/183932.pem
	I0729 17:16:31.759703   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 17:16:31.785428   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 17:16:31.809264   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 17:16:31.832249   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 17:16:31.855181   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 17:16:31.878759   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 17:16:31.901923   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 17:16:31.924393   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 17:16:31.947254   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 17:16:31.970819   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 17:16:31.997211   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 17:16:32.038094   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 17:16:32.061962   29751 ssh_runner.go:195] Run: openssl version
	I0729 17:16:32.068624   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 17:16:32.080215   29751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:16:32.084892   29751 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:16:32.084946   29751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:16:32.090981   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 17:16:32.102031   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 17:16:32.113730   29751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 17:16:32.118688   29751 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 17:16:32.118746   29751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 17:16:32.125152   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 17:16:32.136583   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 17:16:32.147701   29751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 17:16:32.152181   29751 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 17:16:32.152225   29751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 17:16:32.158013   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
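The three symlink blocks above follow the standard OpenSSL trust-store layout: copy the PEM into /usr/share/ca-certificates, compute its subject hash, and link /etc/ssl/certs/<hash>.0 at it. The hash-derived names (b5213941.0, 51391683.0, 3ec20f2e.0) come from exactly that computation; a condensed sketch:

	for pem in minikubeCA.pem 18393.pem 183932.pem; do
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/$pem)
	  sudo ln -fs /usr/share/ca-certificates/$pem /etc/ssl/certs/$h.0
	done
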
	I0729 17:16:32.168641   29751 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 17:16:32.172560   29751 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 17:16:32.172615   29751 kubeadm.go:392] StartCluster: {Name:ha-900414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-900414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:16:32.172698   29751 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 17:16:32.172754   29751 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 17:16:32.209294   29751 cri.go:89] found id: ""
	I0729 17:16:32.209355   29751 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 17:16:32.219469   29751 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 17:16:32.228986   29751 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 17:16:32.239333   29751 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 17:16:32.239351   29751 kubeadm.go:157] found existing configuration files:
	
	I0729 17:16:32.239413   29751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 17:16:32.248360   29751 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 17:16:32.248414   29751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 17:16:32.257941   29751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 17:16:32.267109   29751 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 17:16:32.267167   29751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 17:16:32.276856   29751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 17:16:32.286060   29751 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 17:16:32.286119   29751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 17:16:32.295868   29751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 17:16:32.305171   29751 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 17:16:32.305232   29751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 17:16:32.315037   29751 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 17:16:32.556657   29751 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 17:16:44.366168   29751 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 17:16:44.366224   29751 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 17:16:44.366300   29751 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 17:16:44.366449   29751 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 17:16:44.366579   29751 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 17:16:44.366675   29751 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 17:16:44.368308   29751 out.go:204]   - Generating certificates and keys ...
	I0729 17:16:44.368393   29751 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 17:16:44.368480   29751 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 17:16:44.368585   29751 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 17:16:44.368661   29751 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 17:16:44.368739   29751 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 17:16:44.368807   29751 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 17:16:44.368884   29751 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 17:16:44.369040   29751 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-900414 localhost] and IPs [192.168.39.114 127.0.0.1 ::1]
	I0729 17:16:44.369119   29751 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 17:16:44.369252   29751 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-900414 localhost] and IPs [192.168.39.114 127.0.0.1 ::1]
	I0729 17:16:44.369338   29751 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 17:16:44.369419   29751 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 17:16:44.369458   29751 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 17:16:44.369506   29751 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 17:16:44.369566   29751 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 17:16:44.369663   29751 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 17:16:44.369767   29751 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 17:16:44.369830   29751 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 17:16:44.369900   29751 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 17:16:44.370025   29751 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 17:16:44.370127   29751 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 17:16:44.372294   29751 out.go:204]   - Booting up control plane ...
	I0729 17:16:44.372393   29751 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 17:16:44.372472   29751 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 17:16:44.372549   29751 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 17:16:44.372637   29751 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 17:16:44.372730   29751 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 17:16:44.372773   29751 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 17:16:44.372924   29751 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 17:16:44.372990   29751 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 17:16:44.373039   29751 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.405714ms
	I0729 17:16:44.373102   29751 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 17:16:44.373176   29751 kubeadm.go:310] [api-check] The API server is healthy after 6.044431111s
	I0729 17:16:44.373284   29751 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 17:16:44.373401   29751 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 17:16:44.373450   29751 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 17:16:44.373695   29751 kubeadm.go:310] [mark-control-plane] Marking the node ha-900414 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 17:16:44.373746   29751 kubeadm.go:310] [bootstrap-token] Using token: ccbc6e.3vl1qmuqbu37bz1a
	I0729 17:16:44.375013   29751 out.go:204]   - Configuring RBAC rules ...
	I0729 17:16:44.375101   29751 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 17:16:44.375181   29751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 17:16:44.375300   29751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 17:16:44.375405   29751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 17:16:44.375507   29751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 17:16:44.375609   29751 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 17:16:44.375739   29751 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 17:16:44.375794   29751 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 17:16:44.375858   29751 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 17:16:44.375867   29751 kubeadm.go:310] 
	I0729 17:16:44.375948   29751 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 17:16:44.375957   29751 kubeadm.go:310] 
	I0729 17:16:44.376067   29751 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 17:16:44.376079   29751 kubeadm.go:310] 
	I0729 17:16:44.376125   29751 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 17:16:44.376213   29751 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 17:16:44.376284   29751 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 17:16:44.376294   29751 kubeadm.go:310] 
	I0729 17:16:44.376371   29751 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 17:16:44.376381   29751 kubeadm.go:310] 
	I0729 17:16:44.376446   29751 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 17:16:44.376459   29751 kubeadm.go:310] 
	I0729 17:16:44.376535   29751 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 17:16:44.376646   29751 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 17:16:44.376751   29751 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 17:16:44.376763   29751 kubeadm.go:310] 
	I0729 17:16:44.376875   29751 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 17:16:44.376971   29751 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 17:16:44.376987   29751 kubeadm.go:310] 
	I0729 17:16:44.377089   29751 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ccbc6e.3vl1qmuqbu37bz1a \
	I0729 17:16:44.377215   29751 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 \
	I0729 17:16:44.377235   29751 kubeadm.go:310] 	--control-plane 
	I0729 17:16:44.377241   29751 kubeadm.go:310] 
	I0729 17:16:44.377308   29751 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 17:16:44.377316   29751 kubeadm.go:310] 
	I0729 17:16:44.377384   29751 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ccbc6e.3vl1qmuqbu37bz1a \
	I0729 17:16:44.377490   29751 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 
	I0729 17:16:44.377502   29751 cni.go:84] Creating CNI manager for ""
	I0729 17:16:44.377507   29751 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 17:16:44.379813   29751 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0729 17:16:44.380988   29751 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0729 17:16:44.386811   29751 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0729 17:16:44.386828   29751 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0729 17:16:44.406508   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
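The lines above show minikube detecting a multinode profile and applying its kindnet CNI manifest with the bundled kubectl. A quick follow-up check (sketch only; it assumes the applied manifest creates a DaemonSet named kindnet in kube-system, which is minikube's default):

    # sketch: confirm the CNI DaemonSet rolled out after the apply above
    kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s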
	I0729 17:16:44.772578   29751 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 17:16:44.772637   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:44.772653   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-900414 minikube.k8s.io/updated_at=2024_07_29T17_16_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899 minikube.k8s.io/name=ha-900414 minikube.k8s.io/primary=true
	I0729 17:16:44.815083   29751 ops.go:34] apiserver oom_adj: -16
	I0729 17:16:44.954862   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:45.455907   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:45.955132   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:46.455775   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:46.955157   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:47.454942   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:47.955856   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:48.455120   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:48.955010   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:49.455369   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:49.955570   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:50.455913   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:50.955887   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:51.455267   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:51.955546   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:52.455700   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:52.955656   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:53.455646   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:53.955585   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:54.455734   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:54.955004   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:55.455549   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:55.955280   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:56.455160   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:56.955292   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:57.455205   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:16:57.610197   29751 kubeadm.go:1113] duration metric: took 12.83761215s to wait for elevateKubeSystemPrivileges
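The block of repeated "kubectl get sa default" calls above is a poll: minikube waits for the default ServiceAccount to exist before it treats the elevateKubeSystemPrivileges step (timed at 12.8s here) as done. A minimal equivalent wait loop, sketch only, assuming kubectl already points at the new cluster:

    # sketch: wait until the default ServiceAccount has been created
    until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5
    done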
	I0729 17:16:57.610234   29751 kubeadm.go:394] duration metric: took 25.437623888s to StartCluster
	I0729 17:16:57.610256   29751 settings.go:142] acquiring lock: {Name:mkd2c4591636cc1d19b23a0dab1807db2e7ea395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:16:57.610345   29751 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 17:16:57.611225   29751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:16:57.611478   29751 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 17:16:57.611490   29751 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:16:57.611514   29751 start.go:241] waiting for startup goroutines ...
	I0729 17:16:57.611522   29751 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 17:16:57.611579   29751 addons.go:69] Setting storage-provisioner=true in profile "ha-900414"
	I0729 17:16:57.611584   29751 addons.go:69] Setting default-storageclass=true in profile "ha-900414"
	I0729 17:16:57.611609   29751 addons.go:234] Setting addon storage-provisioner=true in "ha-900414"
	I0729 17:16:57.611639   29751 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:16:57.611674   29751 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:16:57.611611   29751 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-900414"
	I0729 17:16:57.611997   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:16:57.612025   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:16:57.612044   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:16:57.612072   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:16:57.626933   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42415
	I0729 17:16:57.626966   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40123
	I0729 17:16:57.627401   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:16:57.627410   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:16:57.627909   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:16:57.627924   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:16:57.628051   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:16:57.628070   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:16:57.628246   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:16:57.628364   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:16:57.628489   29751 main.go:141] libmachine: (ha-900414) Calling .GetState
	I0729 17:16:57.628845   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:16:57.628882   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:16:57.630437   29751 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 17:16:57.630640   29751 kapi.go:59] client config for ha-900414: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.crt", KeyFile:"/home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.key", CAFile:"/home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 17:16:57.631041   29751 cert_rotation.go:137] Starting client certificate rotation controller
	I0729 17:16:57.631181   29751 addons.go:234] Setting addon default-storageclass=true in "ha-900414"
	I0729 17:16:57.631211   29751 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:16:57.631431   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:16:57.631460   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:16:57.643727   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42127
	I0729 17:16:57.644309   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:16:57.644832   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:16:57.644855   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:16:57.644944   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46585
	I0729 17:16:57.645218   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:16:57.645270   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:16:57.645373   29751 main.go:141] libmachine: (ha-900414) Calling .GetState
	I0729 17:16:57.645699   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:16:57.645717   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:16:57.646055   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:16:57.646502   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:16:57.646524   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:16:57.647190   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:16:57.649355   29751 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 17:16:57.650805   29751 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 17:16:57.650819   29751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 17:16:57.650833   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:16:57.654412   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:57.654806   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:57.654829   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:57.654971   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:16:57.655140   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:57.655314   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:16:57.655472   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:16:57.662001   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39219
	I0729 17:16:57.662355   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:16:57.662786   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:16:57.662809   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:16:57.663109   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:16:57.663291   29751 main.go:141] libmachine: (ha-900414) Calling .GetState
	I0729 17:16:57.664562   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:16:57.664773   29751 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 17:16:57.664787   29751 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 17:16:57.664806   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:16:57.667300   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:57.667686   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:16:57.667712   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:16:57.667941   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:16:57.668099   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:16:57.668250   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:16:57.668378   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:16:57.767183   29751 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 17:16:57.791168   29751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 17:16:57.847824   29751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 17:16:58.284156   29751 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0729 17:16:58.284193   29751 main.go:141] libmachine: Making call to close driver server
	I0729 17:16:58.284212   29751 main.go:141] libmachine: (ha-900414) Calling .Close
	I0729 17:16:58.284483   29751 main.go:141] libmachine: (ha-900414) DBG | Closing plugin on server side
	I0729 17:16:58.284516   29751 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:16:58.284528   29751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:16:58.284545   29751 main.go:141] libmachine: Making call to close driver server
	I0729 17:16:58.284554   29751 main.go:141] libmachine: (ha-900414) Calling .Close
	I0729 17:16:58.284794   29751 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:16:58.284807   29751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:16:58.284935   29751 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0729 17:16:58.284944   29751 round_trippers.go:469] Request Headers:
	I0729 17:16:58.284955   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:16:58.284962   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:16:58.295163   29751 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0729 17:16:58.295695   29751 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0729 17:16:58.295709   29751 round_trippers.go:469] Request Headers:
	I0729 17:16:58.295719   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:16:58.295728   29751 round_trippers.go:473]     Content-Type: application/json
	I0729 17:16:58.295732   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:16:58.303886   29751 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0729 17:16:58.304015   29751 main.go:141] libmachine: Making call to close driver server
	I0729 17:16:58.304025   29751 main.go:141] libmachine: (ha-900414) Calling .Close
	I0729 17:16:58.304276   29751 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:16:58.304295   29751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:16:58.304297   29751 main.go:141] libmachine: (ha-900414) DBG | Closing plugin on server side
	I0729 17:16:58.515757   29751 main.go:141] libmachine: Making call to close driver server
	I0729 17:16:58.515781   29751 main.go:141] libmachine: (ha-900414) Calling .Close
	I0729 17:16:58.516040   29751 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:16:58.516057   29751 main.go:141] libmachine: (ha-900414) DBG | Closing plugin on server side
	I0729 17:16:58.516062   29751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:16:58.516072   29751 main.go:141] libmachine: Making call to close driver server
	I0729 17:16:58.516079   29751 main.go:141] libmachine: (ha-900414) Calling .Close
	I0729 17:16:58.516303   29751 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:16:58.516319   29751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:16:58.516321   29751 main.go:141] libmachine: (ha-900414) DBG | Closing plugin on server side
	I0729 17:16:58.518110   29751 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0729 17:16:58.519224   29751 addons.go:510] duration metric: took 907.699792ms for enable addons: enabled=[default-storageclass storage-provisioner]
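With both addons reported enabled above, the result can be verified directly against the cluster. Sketch only; the object names standard (StorageClass) and storage-provisioner (pod in kube-system) are minikube's defaults and are assumed here:

    # sketch: check the objects created by the two addons enabled above
    kubectl get storageclass standard
    kubectl -n kube-system get pod storage-provisioner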
	I0729 17:16:58.519258   29751 start.go:246] waiting for cluster config update ...
	I0729 17:16:58.519272   29751 start.go:255] writing updated cluster config ...
	I0729 17:16:58.520741   29751 out.go:177] 
	I0729 17:16:58.521901   29751 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:16:58.521968   29751 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/config.json ...
	I0729 17:16:58.523471   29751 out.go:177] * Starting "ha-900414-m02" control-plane node in "ha-900414" cluster
	I0729 17:16:58.524524   29751 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:16:58.524544   29751 cache.go:56] Caching tarball of preloaded images
	I0729 17:16:58.524616   29751 preload.go:172] Found /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 17:16:58.524628   29751 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 17:16:58.524682   29751 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/config.json ...
	I0729 17:16:58.524829   29751 start.go:360] acquireMachinesLock for ha-900414-m02: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:16:58.524877   29751 start.go:364] duration metric: took 31.635µs to acquireMachinesLock for "ha-900414-m02"
	I0729 17:16:58.524893   29751 start.go:93] Provisioning new machine with config: &{Name:ha-900414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-900414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:16:58.524954   29751 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0729 17:16:58.526343   29751 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 17:16:58.526429   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:16:58.526451   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:16:58.540615   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45315
	I0729 17:16:58.541056   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:16:58.541504   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:16:58.541524   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:16:58.541833   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:16:58.542024   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetMachineName
	I0729 17:16:58.542159   29751 main.go:141] libmachine: (ha-900414-m02) Calling .DriverName
	I0729 17:16:58.542309   29751 start.go:159] libmachine.API.Create for "ha-900414" (driver="kvm2")
	I0729 17:16:58.542331   29751 client.go:168] LocalClient.Create starting
	I0729 17:16:58.542373   29751 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem
	I0729 17:16:58.542415   29751 main.go:141] libmachine: Decoding PEM data...
	I0729 17:16:58.542436   29751 main.go:141] libmachine: Parsing certificate...
	I0729 17:16:58.542499   29751 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem
	I0729 17:16:58.542526   29751 main.go:141] libmachine: Decoding PEM data...
	I0729 17:16:58.542541   29751 main.go:141] libmachine: Parsing certificate...
	I0729 17:16:58.542572   29751 main.go:141] libmachine: Running pre-create checks...
	I0729 17:16:58.542583   29751 main.go:141] libmachine: (ha-900414-m02) Calling .PreCreateCheck
	I0729 17:16:58.542728   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetConfigRaw
	I0729 17:16:58.543136   29751 main.go:141] libmachine: Creating machine...
	I0729 17:16:58.543150   29751 main.go:141] libmachine: (ha-900414-m02) Calling .Create
	I0729 17:16:58.543292   29751 main.go:141] libmachine: (ha-900414-m02) Creating KVM machine...
	I0729 17:16:58.544385   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found existing default KVM network
	I0729 17:16:58.544525   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found existing private KVM network mk-ha-900414
	I0729 17:16:58.544645   29751 main.go:141] libmachine: (ha-900414-m02) Setting up store path in /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02 ...
	I0729 17:16:58.544668   29751 main.go:141] libmachine: (ha-900414-m02) Building disk image from file:///home/jenkins/minikube-integration/19345-11206/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 17:16:58.544721   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:16:58.544636   30168 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 17:16:58.544853   29751 main.go:141] libmachine: (ha-900414-m02) Downloading /home/jenkins/minikube-integration/19345-11206/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19345-11206/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 17:16:58.772906   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:16:58.772779   30168 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/id_rsa...
	I0729 17:16:58.905768   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:16:58.905649   30168 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/ha-900414-m02.rawdisk...
	I0729 17:16:58.905810   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Writing magic tar header
	I0729 17:16:58.905864   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Writing SSH key tar header
	I0729 17:16:58.905897   29751 main.go:141] libmachine: (ha-900414-m02) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02 (perms=drwx------)
	I0729 17:16:58.905914   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:16:58.905754   30168 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02 ...
	I0729 17:16:58.905938   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02
	I0729 17:16:58.905958   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube/machines
	I0729 17:16:58.905972   29751 main.go:141] libmachine: (ha-900414-m02) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube/machines (perms=drwxr-xr-x)
	I0729 17:16:58.905986   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 17:16:58.905998   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206
	I0729 17:16:58.906003   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 17:16:58.906018   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Checking permissions on dir: /home/jenkins
	I0729 17:16:58.906026   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Checking permissions on dir: /home
	I0729 17:16:58.906040   29751 main.go:141] libmachine: (ha-900414-m02) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube (perms=drwxr-xr-x)
	I0729 17:16:58.906057   29751 main.go:141] libmachine: (ha-900414-m02) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206 (perms=drwxrwxr-x)
	I0729 17:16:58.906069   29751 main.go:141] libmachine: (ha-900414-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 17:16:58.906080   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Skipping /home - not owner
	I0729 17:16:58.906093   29751 main.go:141] libmachine: (ha-900414-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 17:16:58.906100   29751 main.go:141] libmachine: (ha-900414-m02) Creating domain...
	I0729 17:16:58.906988   29751 main.go:141] libmachine: (ha-900414-m02) define libvirt domain using xml: 
	I0729 17:16:58.907009   29751 main.go:141] libmachine: (ha-900414-m02) <domain type='kvm'>
	I0729 17:16:58.907018   29751 main.go:141] libmachine: (ha-900414-m02)   <name>ha-900414-m02</name>
	I0729 17:16:58.907026   29751 main.go:141] libmachine: (ha-900414-m02)   <memory unit='MiB'>2200</memory>
	I0729 17:16:58.907036   29751 main.go:141] libmachine: (ha-900414-m02)   <vcpu>2</vcpu>
	I0729 17:16:58.907048   29751 main.go:141] libmachine: (ha-900414-m02)   <features>
	I0729 17:16:58.907056   29751 main.go:141] libmachine: (ha-900414-m02)     <acpi/>
	I0729 17:16:58.907063   29751 main.go:141] libmachine: (ha-900414-m02)     <apic/>
	I0729 17:16:58.907075   29751 main.go:141] libmachine: (ha-900414-m02)     <pae/>
	I0729 17:16:58.907089   29751 main.go:141] libmachine: (ha-900414-m02)     
	I0729 17:16:58.907096   29751 main.go:141] libmachine: (ha-900414-m02)   </features>
	I0729 17:16:58.907107   29751 main.go:141] libmachine: (ha-900414-m02)   <cpu mode='host-passthrough'>
	I0729 17:16:58.907117   29751 main.go:141] libmachine: (ha-900414-m02)   
	I0729 17:16:58.907126   29751 main.go:141] libmachine: (ha-900414-m02)   </cpu>
	I0729 17:16:58.907138   29751 main.go:141] libmachine: (ha-900414-m02)   <os>
	I0729 17:16:58.907144   29751 main.go:141] libmachine: (ha-900414-m02)     <type>hvm</type>
	I0729 17:16:58.907164   29751 main.go:141] libmachine: (ha-900414-m02)     <boot dev='cdrom'/>
	I0729 17:16:58.907177   29751 main.go:141] libmachine: (ha-900414-m02)     <boot dev='hd'/>
	I0729 17:16:58.907187   29751 main.go:141] libmachine: (ha-900414-m02)     <bootmenu enable='no'/>
	I0729 17:16:58.907197   29751 main.go:141] libmachine: (ha-900414-m02)   </os>
	I0729 17:16:58.907208   29751 main.go:141] libmachine: (ha-900414-m02)   <devices>
	I0729 17:16:58.907219   29751 main.go:141] libmachine: (ha-900414-m02)     <disk type='file' device='cdrom'>
	I0729 17:16:58.907236   29751 main.go:141] libmachine: (ha-900414-m02)       <source file='/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/boot2docker.iso'/>
	I0729 17:16:58.907247   29751 main.go:141] libmachine: (ha-900414-m02)       <target dev='hdc' bus='scsi'/>
	I0729 17:16:58.907269   29751 main.go:141] libmachine: (ha-900414-m02)       <readonly/>
	I0729 17:16:58.907287   29751 main.go:141] libmachine: (ha-900414-m02)     </disk>
	I0729 17:16:58.907297   29751 main.go:141] libmachine: (ha-900414-m02)     <disk type='file' device='disk'>
	I0729 17:16:58.907314   29751 main.go:141] libmachine: (ha-900414-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 17:16:58.907330   29751 main.go:141] libmachine: (ha-900414-m02)       <source file='/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/ha-900414-m02.rawdisk'/>
	I0729 17:16:58.907341   29751 main.go:141] libmachine: (ha-900414-m02)       <target dev='hda' bus='virtio'/>
	I0729 17:16:58.907352   29751 main.go:141] libmachine: (ha-900414-m02)     </disk>
	I0729 17:16:58.907363   29751 main.go:141] libmachine: (ha-900414-m02)     <interface type='network'>
	I0729 17:16:58.907372   29751 main.go:141] libmachine: (ha-900414-m02)       <source network='mk-ha-900414'/>
	I0729 17:16:58.907382   29751 main.go:141] libmachine: (ha-900414-m02)       <model type='virtio'/>
	I0729 17:16:58.907393   29751 main.go:141] libmachine: (ha-900414-m02)     </interface>
	I0729 17:16:58.907408   29751 main.go:141] libmachine: (ha-900414-m02)     <interface type='network'>
	I0729 17:16:58.907420   29751 main.go:141] libmachine: (ha-900414-m02)       <source network='default'/>
	I0729 17:16:58.907428   29751 main.go:141] libmachine: (ha-900414-m02)       <model type='virtio'/>
	I0729 17:16:58.907438   29751 main.go:141] libmachine: (ha-900414-m02)     </interface>
	I0729 17:16:58.907450   29751 main.go:141] libmachine: (ha-900414-m02)     <serial type='pty'>
	I0729 17:16:58.907459   29751 main.go:141] libmachine: (ha-900414-m02)       <target port='0'/>
	I0729 17:16:58.907468   29751 main.go:141] libmachine: (ha-900414-m02)     </serial>
	I0729 17:16:58.907479   29751 main.go:141] libmachine: (ha-900414-m02)     <console type='pty'>
	I0729 17:16:58.907493   29751 main.go:141] libmachine: (ha-900414-m02)       <target type='serial' port='0'/>
	I0729 17:16:58.907505   29751 main.go:141] libmachine: (ha-900414-m02)     </console>
	I0729 17:16:58.907515   29751 main.go:141] libmachine: (ha-900414-m02)     <rng model='virtio'>
	I0729 17:16:58.907526   29751 main.go:141] libmachine: (ha-900414-m02)       <backend model='random'>/dev/random</backend>
	I0729 17:16:58.907536   29751 main.go:141] libmachine: (ha-900414-m02)     </rng>
	I0729 17:16:58.907542   29751 main.go:141] libmachine: (ha-900414-m02)     
	I0729 17:16:58.907548   29751 main.go:141] libmachine: (ha-900414-m02)     
	I0729 17:16:58.907558   29751 main.go:141] libmachine: (ha-900414-m02)   </devices>
	I0729 17:16:58.907567   29751 main.go:141] libmachine: (ha-900414-m02) </domain>
	I0729 17:16:58.907593   29751 main.go:141] libmachine: (ha-900414-m02) 
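The XML logged between <domain> and </domain> above is the libvirt definition generated for the m02 machine. The same definition could be loaded by hand with virsh (sketch only, assuming the XML were first saved to a file named ha-900414-m02.xml):

    # sketch: define and boot the domain from a saved copy of the XML above
    virsh define ha-900414-m02.xml
    virsh start ha-900414-m02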
	I0729 17:16:58.913793   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:d1:4d:17 in network default
	I0729 17:16:58.914411   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:16:58.914427   29751 main.go:141] libmachine: (ha-900414-m02) Ensuring networks are active...
	I0729 17:16:58.915106   29751 main.go:141] libmachine: (ha-900414-m02) Ensuring network default is active
	I0729 17:16:58.915393   29751 main.go:141] libmachine: (ha-900414-m02) Ensuring network mk-ha-900414 is active
	I0729 17:16:58.915695   29751 main.go:141] libmachine: (ha-900414-m02) Getting domain xml...
	I0729 17:16:58.916367   29751 main.go:141] libmachine: (ha-900414-m02) Creating domain...
	I0729 17:17:00.619006   29751 main.go:141] libmachine: (ha-900414-m02) Waiting to get IP...
	I0729 17:17:00.619723   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:00.620118   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:00.620142   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:00.620098   30168 retry.go:31] will retry after 188.399655ms: waiting for machine to come up
	I0729 17:17:00.810510   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:00.811048   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:00.811072   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:00.810990   30168 retry.go:31] will retry after 292.630472ms: waiting for machine to come up
	I0729 17:17:01.105586   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:01.106002   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:01.106039   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:01.105964   30168 retry.go:31] will retry after 319.398962ms: waiting for machine to come up
	I0729 17:17:01.428994   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:01.429539   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:01.429566   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:01.429502   30168 retry.go:31] will retry after 464.509758ms: waiting for machine to come up
	I0729 17:17:01.895053   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:01.895562   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:01.895592   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:01.895517   30168 retry.go:31] will retry after 484.399614ms: waiting for machine to come up
	I0729 17:17:02.381074   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:02.381631   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:02.381686   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:02.381606   30168 retry.go:31] will retry after 860.971027ms: waiting for machine to come up
	I0729 17:17:03.243726   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:03.244282   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:03.244341   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:03.244265   30168 retry.go:31] will retry after 863.225264ms: waiting for machine to come up
	I0729 17:17:04.108705   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:04.109216   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:04.109244   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:04.109172   30168 retry.go:31] will retry after 1.020483871s: waiting for machine to come up
	I0729 17:17:05.131433   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:05.131910   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:05.131935   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:05.131848   30168 retry.go:31] will retry after 1.375261619s: waiting for machine to come up
	I0729 17:17:06.509382   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:06.509825   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:06.509852   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:06.509790   30168 retry.go:31] will retry after 2.25713359s: waiting for machine to come up
	I0729 17:17:08.768596   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:08.769231   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:08.769260   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:08.769187   30168 retry.go:31] will retry after 2.235550458s: waiting for machine to come up
	I0729 17:17:11.007553   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:11.008004   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:11.008021   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:11.007976   30168 retry.go:31] will retry after 2.417813916s: waiting for machine to come up
	I0729 17:17:13.427492   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:13.427953   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:13.427980   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:13.427908   30168 retry.go:31] will retry after 4.370715986s: waiting for machine to come up
	I0729 17:17:17.803728   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:17.804160   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find current IP address of domain ha-900414-m02 in network mk-ha-900414
	I0729 17:17:17.804188   29751 main.go:141] libmachine: (ha-900414-m02) DBG | I0729 17:17:17.804120   30168 retry.go:31] will retry after 3.853692825s: waiting for machine to come up
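
The retries above are libmachine polling libvirt for the m02 guest's DHCP lease, sleeping a progressively longer interval between attempts until the domain reports an address. As an illustration only (not minikube's actual retry.go code), a minimal Go sketch of the same poll-with-growing-backoff pattern could look like this; lookupIP is a hypothetical probe standing in for the libvirt lease query and always fails in this sketch:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP is a stand-in for the libvirt DHCP-lease query; in this sketch
    // it always reports that the guest has no address yet.
    func lookupIP() (string, error) {
    	return "", errors.New("unable to find current IP address")
    }

    // waitForIP polls lookupIP, growing the wait between attempts and adding a
    // little jitter, until it succeeds or the overall deadline expires.
    func waitForIP(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	wait := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(); err == nil {
    			return ip, nil
    		}
    		jitter := time.Duration(rand.Int63n(int64(wait) / 2))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait+jitter)
    		time.Sleep(wait + jitter)
    		if wait < 5*time.Second {
    			wait = wait * 3 / 2 // grow roughly 1.5x per attempt, as the log intervals do
    		}
    	}
    	return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
    	if ip, err := waitForIP(3 * time.Second); err != nil {
    		fmt.Println("error:", err)
    	} else {
    		fmt.Println("found IP:", ip)
    	}
    }
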
	I0729 17:17:21.659016   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:21.659460   29751 main.go:141] libmachine: (ha-900414-m02) Found IP for machine: 192.168.39.111
	I0729 17:17:21.659486   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has current primary IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:21.659495   29751 main.go:141] libmachine: (ha-900414-m02) Reserving static IP address...
	I0729 17:17:21.659832   29751 main.go:141] libmachine: (ha-900414-m02) DBG | unable to find host DHCP lease matching {name: "ha-900414-m02", mac: "52:54:00:a0:84:83", ip: "192.168.39.111"} in network mk-ha-900414
	I0729 17:17:21.731281   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Getting to WaitForSSH function...
	I0729 17:17:21.731311   29751 main.go:141] libmachine: (ha-900414-m02) Reserved static IP address: 192.168.39.111
	I0729 17:17:21.731324   29751 main.go:141] libmachine: (ha-900414-m02) Waiting for SSH to be available...
	I0729 17:17:21.733654   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:21.734150   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:21.734177   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:21.734279   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Using SSH client type: external
	I0729 17:17:21.734303   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/id_rsa (-rw-------)
	I0729 17:17:21.734329   29751 main.go:141] libmachine: (ha-900414-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 17:17:21.734342   29751 main.go:141] libmachine: (ha-900414-m02) DBG | About to run SSH command:
	I0729 17:17:21.734371   29751 main.go:141] libmachine: (ha-900414-m02) DBG | exit 0
	I0729 17:17:21.854563   29751 main.go:141] libmachine: (ha-900414-m02) DBG | SSH cmd err, output: <nil>: 
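
Before provisioning starts, the driver confirms the guest's sshd is reachable by running a bare `exit 0` through an external ssh client with host-key checking disabled (the command logged just above). A rough, hypothetical Go equivalent of that probe, shelling out to the system ssh binary with the same style of options (addr and keyPath are placeholders):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // sshReachable runs "exit 0" on the guest via the system ssh client; a nil
    // error means sshd accepted the key and executed the command.
    func sshReachable(addr, keyPath string) error {
    	cmd := exec.Command("ssh",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-o", "PasswordAuthentication=no",
    		"-i", keyPath,
    		"docker@"+addr,
    		"exit 0")
    	return cmd.Run()
    }

    func main() {
    	for i := 0; i < 10; i++ {
    		if err := sshReachable("192.168.39.111", "/path/to/id_rsa"); err == nil {
    			fmt.Println("SSH is available")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("gave up waiting for SSH")
    }
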
	I0729 17:17:21.854851   29751 main.go:141] libmachine: (ha-900414-m02) KVM machine creation complete!
	I0729 17:17:21.855119   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetConfigRaw
	I0729 17:17:21.855699   29751 main.go:141] libmachine: (ha-900414-m02) Calling .DriverName
	I0729 17:17:21.855898   29751 main.go:141] libmachine: (ha-900414-m02) Calling .DriverName
	I0729 17:17:21.856085   29751 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 17:17:21.856101   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetState
	I0729 17:17:21.857273   29751 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 17:17:21.857288   29751 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 17:17:21.857296   29751 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 17:17:21.857303   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:17:21.859622   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:21.860022   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:21.860050   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:21.860171   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:17:21.860343   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:21.860499   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:21.860656   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:17:21.860857   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:17:21.861112   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0729 17:17:21.861133   29751 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 17:17:21.957577   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:17:21.957598   29751 main.go:141] libmachine: Detecting the provisioner...
	I0729 17:17:21.957609   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:17:21.960289   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:21.960658   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:21.960683   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:21.960879   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:17:21.961042   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:21.961190   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:21.961335   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:17:21.961486   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:17:21.961640   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0729 17:17:21.961651   29751 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 17:17:22.059224   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 17:17:22.059299   29751 main.go:141] libmachine: found compatible host: buildroot
	I0729 17:17:22.059309   29751 main.go:141] libmachine: Provisioning with buildroot...
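
Provisioner detection works by catting /etc/os-release on the guest and matching the distribution ID ("buildroot" here) against the provisioners libmachine knows about. A minimal sketch of parsing that key=value output in Go, assuming the file contents have already been fetched over SSH:

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    // parseOSRelease turns /etc/os-release style KEY=value lines into a map,
    // stripping surrounding quotes from the values.
    func parseOSRelease(contents string) map[string]string {
    	out := map[string]string{}
    	sc := bufio.NewScanner(strings.NewReader(contents))
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if line == "" || strings.HasPrefix(line, "#") {
    			continue
    		}
    		k, v, ok := strings.Cut(line, "=")
    		if !ok {
    			continue
    		}
    		out[k] = strings.Trim(v, `"`)
    	}
    	return out
    }

    func main() {
    	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
    	info := parseOSRelease(sample)
    	fmt.Println("detected distribution:", info["ID"], info["VERSION_ID"])
    }
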
	I0729 17:17:22.059317   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetMachineName
	I0729 17:17:22.059537   29751 buildroot.go:166] provisioning hostname "ha-900414-m02"
	I0729 17:17:22.059562   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetMachineName
	I0729 17:17:22.059774   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:17:22.062185   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:22.062523   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:22.062550   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:22.062672   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:17:22.062834   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:22.062990   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:22.063094   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:17:22.063260   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:17:22.063416   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0729 17:17:22.063426   29751 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-900414-m02 && echo "ha-900414-m02" | sudo tee /etc/hostname
	I0729 17:17:22.180800   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-900414-m02
	
	I0729 17:17:22.180830   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:17:22.183377   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:22.183784   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:22.183811   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:22.183965   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:17:22.184142   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:22.184301   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:22.184440   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:17:22.184599   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:17:22.184750   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0729 17:17:22.184765   29751 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-900414-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-900414-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-900414-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 17:17:22.291289   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
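
Hostname provisioning above is two remote commands: one sets the hostname and rewrites /etc/hostname, the other ensures /etc/hosts maps 127.0.1.1 to the new name (replacing an existing 127.0.1.1 line, otherwise appending one). A hedged Go sketch that renders those same two commands for an arbitrary hostname, for illustration only:

    package main

    import "fmt"

    // hostnameCommands returns the remote shell commands used to set a guest's
    // hostname and keep /etc/hosts consistent with it.
    func hostnameCommands(name string) (setHostname, fixHosts string) {
    	setHostname = fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
    	fixHosts = fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, name)
    	return setHostname, fixHosts
    }

    func main() {
    	set, fix := hostnameCommands("ha-900414-m02")
    	fmt.Println(set)
    	fmt.Println(fix)
    }
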
	I0729 17:17:22.291314   29751 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 17:17:22.291333   29751 buildroot.go:174] setting up certificates
	I0729 17:17:22.291344   29751 provision.go:84] configureAuth start
	I0729 17:17:22.291355   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetMachineName
	I0729 17:17:22.291638   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetIP
	I0729 17:17:22.294329   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:22.294679   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:22.294704   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:22.294918   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:17:22.297031   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:22.297352   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:22.297378   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:22.297495   29751 provision.go:143] copyHostCerts
	I0729 17:17:22.297526   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 17:17:22.297566   29751 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 17:17:22.297575   29751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 17:17:22.297645   29751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 17:17:22.297747   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 17:17:22.297772   29751 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 17:17:22.297785   29751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 17:17:22.297829   29751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 17:17:22.297899   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 17:17:22.297923   29751 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 17:17:22.297933   29751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 17:17:22.297974   29751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
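
copyHostCerts refreshes ca.pem, cert.pem and key.pem under the .minikube directory by removing any stale copy and writing the file again, as the found/rm/cp triplets above show. A minimal local remove-then-copy helper in Go that mirrors that behaviour (paths are placeholders, permissions assumed):

    package main

    import (
    	"fmt"
    	"io"
    	"os"
    )

    // copyCert removes any existing destination and copies src to dst with
    // owner-only permissions, returning the number of bytes written.
    func copyCert(src, dst string) (int64, error) {
    	if err := os.Remove(dst); err != nil && !os.IsNotExist(err) {
    		return 0, err
    	}
    	in, err := os.Open(src)
    	if err != nil {
    		return 0, err
    	}
    	defer in.Close()
    	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o600)
    	if err != nil {
    		return 0, err
    	}
    	defer out.Close()
    	return io.Copy(out, in)
    }

    func main() {
    	n, err := copyCert("certs/ca.pem", "ca.pem")
    	if err != nil {
    		fmt.Println("copy failed:", err)
    		return
    	}
    	fmt.Printf("copied ca.pem (%d bytes)\n", n)
    }
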
	I0729 17:17:22.298039   29751 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.ha-900414-m02 san=[127.0.0.1 192.168.39.111 ha-900414-m02 localhost minikube]
	I0729 17:17:22.640633   29751 provision.go:177] copyRemoteCerts
	I0729 17:17:22.640687   29751 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 17:17:22.640711   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:17:22.643109   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:22.643428   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:22.643467   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:22.643664   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:17:22.643876   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:22.644058   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:17:22.644207   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/id_rsa Username:docker}
	I0729 17:17:22.724646   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 17:17:22.724714   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 17:17:22.752409   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 17:17:22.752479   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 17:17:22.776188   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 17:17:22.776242   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 17:17:22.799494   29751 provision.go:87] duration metric: took 508.139423ms to configureAuth
	I0729 17:17:22.799514   29751 buildroot.go:189] setting minikube options for container-runtime
	I0729 17:17:22.799685   29751 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:17:22.799762   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:17:22.802126   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:22.802649   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:22.802687   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:22.802906   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:17:22.803153   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:22.803310   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:22.803458   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:17:22.803590   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:17:22.803750   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0729 17:17:22.803769   29751 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 17:17:23.070079   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 17:17:23.070100   29751 main.go:141] libmachine: Checking connection to Docker...
	I0729 17:17:23.070108   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetURL
	I0729 17:17:23.071565   29751 main.go:141] libmachine: (ha-900414-m02) DBG | Using libvirt version 6000000
	I0729 17:17:23.073565   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.073827   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:23.073851   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.074009   29751 main.go:141] libmachine: Docker is up and running!
	I0729 17:17:23.074026   29751 main.go:141] libmachine: Reticulating splines...
	I0729 17:17:23.074032   29751 client.go:171] duration metric: took 24.53169471s to LocalClient.Create
	I0729 17:17:23.074049   29751 start.go:167] duration metric: took 24.531741976s to libmachine.API.Create "ha-900414"
	I0729 17:17:23.074058   29751 start.go:293] postStartSetup for "ha-900414-m02" (driver="kvm2")
	I0729 17:17:23.074067   29751 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 17:17:23.074090   29751 main.go:141] libmachine: (ha-900414-m02) Calling .DriverName
	I0729 17:17:23.074354   29751 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 17:17:23.074415   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:17:23.076745   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.077166   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:23.077200   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.077379   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:17:23.077584   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:23.077742   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:17:23.077886   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/id_rsa Username:docker}
	I0729 17:17:23.156548   29751 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 17:17:23.160575   29751 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 17:17:23.160600   29751 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 17:17:23.160663   29751 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 17:17:23.160750   29751 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 17:17:23.160764   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> /etc/ssl/certs/183932.pem
	I0729 17:17:23.160882   29751 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 17:17:23.169949   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 17:17:23.192728   29751 start.go:296] duration metric: took 118.657263ms for postStartSetup
	I0729 17:17:23.192802   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetConfigRaw
	I0729 17:17:23.193367   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetIP
	I0729 17:17:23.195993   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.196313   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:23.196341   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.196551   29751 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/config.json ...
	I0729 17:17:23.196742   29751 start.go:128] duration metric: took 24.671779175s to createHost
	I0729 17:17:23.196763   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:17:23.199094   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.199406   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:23.199432   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.199623   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:17:23.199825   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:23.199972   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:23.200100   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:17:23.200263   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:17:23.200432   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0729 17:17:23.200447   29751 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 17:17:23.298817   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722273443.257470758
	
	I0729 17:17:23.298841   29751 fix.go:216] guest clock: 1722273443.257470758
	I0729 17:17:23.298849   29751 fix.go:229] Guest: 2024-07-29 17:17:23.257470758 +0000 UTC Remote: 2024-07-29 17:17:23.196753922 +0000 UTC m=+83.553013806 (delta=60.716836ms)
	I0729 17:17:23.298873   29751 fix.go:200] guest clock delta is within tolerance: 60.716836ms
	I0729 17:17:23.298878   29751 start.go:83] releasing machines lock for "ha-900414-m02", held for 24.773992971s
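
The guest clock check runs `date +%s.%N` inside the VM and compares the result against the host's wall clock; the 60.7ms delta logged above is inside tolerance, so nothing is adjusted. A small Go sketch of that comparison, assuming the command's stdout has already been captured (the 2s tolerance is an assumption, not minikube's exact value):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns how far the
    // guest clock is ahead of (or behind) the given host reference time.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
    	secStr, nsecStr, _ := strings.Cut(strings.TrimSpace(guestOut), ".")
    	sec, err := strconv.ParseInt(secStr, 10, 64)
    	if err != nil {
    		return 0, err
    	}
    	var nsec int64
    	if nsecStr != "" {
    		for len(nsecStr) < 9 { // pad the fractional part out to nanoseconds
    			nsecStr += "0"
    		}
    		if nsec, err = strconv.ParseInt(nsecStr[:9], 10, 64); err != nil {
    			return 0, err
    		}
    	}
    	return time.Unix(sec, nsec).Sub(host), nil
    }

    func main() {
    	host := time.Unix(1722273443, 196753922) // host-side reference, taken from the log for illustration
    	delta, err := clockDelta("1722273443.257470758", host)
    	if err != nil {
    		fmt.Println("parse error:", err)
    		return
    	}
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 2 * time.Second // assumed tolerance, not minikube's exact value
    	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
    }
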
	I0729 17:17:23.298896   29751 main.go:141] libmachine: (ha-900414-m02) Calling .DriverName
	I0729 17:17:23.299203   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetIP
	I0729 17:17:23.301678   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.302011   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:23.302039   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.304375   29751 out.go:177] * Found network options:
	I0729 17:17:23.305631   29751 out.go:177]   - NO_PROXY=192.168.39.114
	W0729 17:17:23.306797   29751 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 17:17:23.306829   29751 main.go:141] libmachine: (ha-900414-m02) Calling .DriverName
	I0729 17:17:23.307291   29751 main.go:141] libmachine: (ha-900414-m02) Calling .DriverName
	I0729 17:17:23.307456   29751 main.go:141] libmachine: (ha-900414-m02) Calling .DriverName
	I0729 17:17:23.307517   29751 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 17:17:23.307561   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	W0729 17:17:23.307639   29751 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 17:17:23.307724   29751 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 17:17:23.307744   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:17:23.310211   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.310603   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:23.310631   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.310672   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.310757   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:17:23.310902   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:23.311070   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:17:23.311107   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:23.311139   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:23.311199   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/id_rsa Username:docker}
	I0729 17:17:23.311276   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:17:23.311431   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:17:23.311588   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:17:23.311721   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/id_rsa Username:docker}
	I0729 17:17:23.542543   29751 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 17:17:23.548943   29751 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 17:17:23.549006   29751 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 17:17:23.565708   29751 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
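
Because CRI-O brings its own CNI configuration, any pre-existing bridge/podman configs under /etc/cni/net.d are sidelined by renaming them with a .mk_disabled suffix (the find/mv one-liner above, which here disabled 87-podman-bridge.conflist). A purely illustrative Go version of that local rename step:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // disableCNIConfigs renames bridge/podman CNI configs in dir so the runtime
    // no longer loads them, skipping files that are already disabled.
    func disableCNIConfigs(dir string) ([]string, error) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return nil, err
    	}
    	var disabled []string
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
    			continue
    		}
    		src := filepath.Join(dir, name)
    		if err := os.Rename(src, src+".mk_disabled"); err != nil {
    			return disabled, err
    		}
    		disabled = append(disabled, src)
    	}
    	return disabled, nil
    }

    func main() {
    	disabled, err := disableCNIConfigs("/etc/cni/net.d")
    	if err != nil {
    		fmt.Println("error:", err)
    	}
    	fmt.Println("disabled configs:", disabled)
    }
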
	I0729 17:17:23.565731   29751 start.go:495] detecting cgroup driver to use...
	I0729 17:17:23.565799   29751 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 17:17:23.582146   29751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 17:17:23.595882   29751 docker.go:217] disabling cri-docker service (if available) ...
	I0729 17:17:23.595932   29751 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 17:17:23.609950   29751 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 17:17:23.623881   29751 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 17:17:23.740433   29751 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 17:17:23.890709   29751 docker.go:233] disabling docker service ...
	I0729 17:17:23.890793   29751 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 17:17:23.905201   29751 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 17:17:23.918576   29751 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 17:17:24.057759   29751 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 17:17:24.165940   29751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 17:17:24.180233   29751 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 17:17:24.198828   29751 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 17:17:24.198905   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:17:24.209360   29751 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 17:17:24.209411   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:17:24.219742   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:17:24.229772   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:17:24.239876   29751 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 17:17:24.250101   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:17:24.260133   29751 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:17:24.276593   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:17:24.286837   29751 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 17:17:24.295829   29751 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 17:17:24.295882   29751 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 17:17:24.308286   29751 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 17:17:24.317390   29751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:17:24.438381   29751 ssh_runner.go:195] Run: sudo systemctl restart crio
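
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses registry.k8s.io/pause:3.9 as the pause image, cgroupfs as the cgroup manager, "pod" as the conmon cgroup, and allows unprivileged low ports; br_netfilter is then loaded, ip_forward enabled, and crio restarted. A hedged Go sketch of the same kind of key = "value" rewriting on a config file, standing in for those sed calls (not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setConfValue replaces (or appends) a `key = "value"` line in a crio-style
    // config file; an illustrative stand-in for the sed edits minikube runs.
    func setConfValue(path, key, value string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	line := fmt.Sprintf("%s = %q", key, value)
    	if re.Match(data) {
    		data = re.ReplaceAll(data, []byte(line))
    	} else {
    		data = append(data, []byte("\n"+line+"\n")...)
    	}
    	return os.WriteFile(path, data, 0o644)
    }

    func main() {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	for k, v := range map[string]string{
    		"pause_image":    "registry.k8s.io/pause:3.9",
    		"cgroup_manager": "cgroupfs",
    	} {
    		if err := setConfValue(conf, k, v); err != nil {
    			fmt.Println("edit failed:", err)
    		}
    	}
    }
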
	I0729 17:17:24.575355   29751 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 17:17:24.575427   29751 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 17:17:24.580382   29751 start.go:563] Will wait 60s for crictl version
	I0729 17:17:24.580435   29751 ssh_runner.go:195] Run: which crictl
	I0729 17:17:24.584163   29751 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 17:17:24.623041   29751 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 17:17:24.623126   29751 ssh_runner.go:195] Run: crio --version
	I0729 17:17:24.651578   29751 ssh_runner.go:195] Run: crio --version
	I0729 17:17:24.679198   29751 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 17:17:24.680823   29751 out.go:177]   - env NO_PROXY=192.168.39.114
	I0729 17:17:24.681949   29751 main.go:141] libmachine: (ha-900414-m02) Calling .GetIP
	I0729 17:17:24.684319   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:24.684652   29751 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:17:13 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:17:24.684678   29751 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:17:24.684857   29751 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 17:17:24.689245   29751 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
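
The grep/rewrite pair above pins host.minikube.internal to the gateway address 192.168.39.1 in the guest's /etc/hosts, dropping any stale entry first. A simple Go version of that ensure-hosts-entry step (illustrative; in the log the real edit happens over SSH):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry drops any existing line for host from an /etc/hosts style
    // file and appends a fresh "ip<tab>host" entry.
    func ensureHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+host) || strings.HasSuffix(line, " "+host) {
    			continue // stale entry, drop it
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
    		fmt.Println("error:", err)
    	}
    }
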
	I0729 17:17:24.702628   29751 mustload.go:65] Loading cluster: ha-900414
	I0729 17:17:24.702858   29751 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:17:24.703235   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:17:24.703267   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:17:24.718581   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40361
	I0729 17:17:24.719166   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:17:24.719615   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:17:24.719632   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:17:24.719974   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:17:24.720165   29751 main.go:141] libmachine: (ha-900414) Calling .GetState
	I0729 17:17:24.721752   29751 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:17:24.722123   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:17:24.722164   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:17:24.736191   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46521
	I0729 17:17:24.736563   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:17:24.736978   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:17:24.737000   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:17:24.737303   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:17:24.737480   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:17:24.737637   29751 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414 for IP: 192.168.39.111
	I0729 17:17:24.737652   29751 certs.go:194] generating shared ca certs ...
	I0729 17:17:24.737673   29751 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:17:24.737822   29751 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 17:17:24.737875   29751 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 17:17:24.737886   29751 certs.go:256] generating profile certs ...
	I0729 17:17:24.737954   29751 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.key
	I0729 17:17:24.737981   29751 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.6155267f
	I0729 17:17:24.737997   29751 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.6155267f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114 192.168.39.111 192.168.39.254]
	I0729 17:17:24.872649   29751 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.6155267f ...
	I0729 17:17:24.872681   29751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.6155267f: {Name:mkd7e35496498bf0055f677e97a30422901015d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:17:24.872892   29751 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.6155267f ...
	I0729 17:17:24.872910   29751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.6155267f: {Name:mk12b1d3199513cca10afd617c4d659c36c472c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:17:24.873035   29751 certs.go:381] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.6155267f -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt
	I0729 17:17:24.873187   29751 certs.go:385] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.6155267f -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key
	I0729 17:17:24.873322   29751 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key
	I0729 17:17:24.873336   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 17:17:24.873350   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 17:17:24.873365   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 17:17:24.873381   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 17:17:24.873396   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 17:17:24.873411   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 17:17:24.873425   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 17:17:24.873436   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
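
Since m02 joins as a second control plane, an apiserver serving certificate is minted whose SANs cover the service VIP 10.96.0.1, localhost, the kube-vip address 192.168.39.254 and both node IPs, signed by the shared minikubeCA (the apiserver.crt.6155267f generation above). The sketch below shows the general shape of issuing such a SAN-bearing certificate with Go's crypto/x509; the key type, validity, subject and the throwaway in-memory CA are assumptions for the example, not minikube's exact settings:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for minikubeCA (the real one is loaded from disk).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// API-server serving certificate with the SANs seen in the log.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-900414-m02", "localhost", "minikube"},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.39.114"), net.ParseIP("192.168.39.111"), net.ParseIP("192.168.39.254"),
    		},
    	}
    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		fmt.Println("sign failed:", err)
    		os.Exit(1)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
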
	I0729 17:17:24.873492   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 17:17:24.873523   29751 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 17:17:24.873533   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 17:17:24.873564   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 17:17:24.873591   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 17:17:24.873617   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 17:17:24.873659   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 17:17:24.873721   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:17:24.873743   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem -> /usr/share/ca-certificates/18393.pem
	I0729 17:17:24.873763   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> /usr/share/ca-certificates/183932.pem
	I0729 17:17:24.873808   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:17:24.877514   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:17:24.878013   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:17:24.878037   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:17:24.878240   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:17:24.878469   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:17:24.878624   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:17:24.878788   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:17:24.958760   29751 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0729 17:17:24.963511   29751 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 17:17:24.977560   29751 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0729 17:17:24.983710   29751 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0729 17:17:24.993985   29751 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 17:17:24.998592   29751 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 17:17:25.008591   29751 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0729 17:17:25.012811   29751 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0729 17:17:25.022725   29751 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0729 17:17:25.026862   29751 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 17:17:25.037182   29751 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0729 17:17:25.041425   29751 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0729 17:17:25.052018   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 17:17:25.078408   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 17:17:25.103576   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 17:17:25.128493   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 17:17:25.152760   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 17:17:25.176177   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 17:17:25.199440   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 17:17:25.222882   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 17:17:25.247585   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 17:17:25.271558   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 17:17:25.294940   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 17:17:25.318779   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 17:17:25.335350   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0729 17:17:25.351545   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 17:17:25.367375   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0729 17:17:25.383568   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 17:17:25.401037   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0729 17:17:25.418529   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 17:17:25.436375   29751 ssh_runner.go:195] Run: openssl version
	I0729 17:17:25.442117   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 17:17:25.452592   29751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:17:25.457023   29751 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:17:25.457071   29751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:17:25.463311   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 17:17:25.473973   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 17:17:25.484597   29751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 17:17:25.488960   29751 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 17:17:25.489009   29751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 17:17:25.494838   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 17:17:25.508665   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 17:17:25.519756   29751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 17:17:25.524362   29751 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 17:17:25.524400   29751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 17:17:25.529987   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 17:17:25.540158   29751 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 17:17:25.544233   29751 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 17:17:25.544291   29751 kubeadm.go:934] updating node {m02 192.168.39.111 8443 v1.30.3 crio true true} ...
	I0729 17:17:25.544369   29751 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-900414-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-900414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 17:17:25.544392   29751 kube-vip.go:115] generating kube-vip config ...
	I0729 17:17:25.544425   29751 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 17:17:25.561651   29751 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 17:17:25.561720   29751 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0729 17:17:25.561780   29751 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 17:17:25.573335   29751 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 17:17:25.573403   29751 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 17:17:25.584187   29751 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 17:17:25.584215   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 17:17:25.584277   29751 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0729 17:17:25.584298   29751 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 17:17:25.584317   29751 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0729 17:17:25.589102   29751 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 17:17:25.589127   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 17:17:26.426450   29751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:17:26.440486   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 17:17:26.440581   29751 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 17:17:26.444763   29751 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 17:17:26.444796   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0729 17:17:28.867115   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 17:17:28.867192   29751 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 17:17:28.872102   29751 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 17:17:28.872129   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 17:17:29.084956   29751 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 17:17:29.094366   29751 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0729 17:17:29.110915   29751 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 17:17:29.126950   29751 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 17:17:29.143874   29751 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 17:17:29.148322   29751 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 17:17:29.161396   29751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:17:29.288462   29751 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:17:29.306423   29751 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:17:29.306884   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:17:29.306935   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:17:29.321781   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33867
	I0729 17:17:29.322256   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:17:29.322797   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:17:29.322822   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:17:29.323144   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:17:29.323324   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:17:29.323436   29751 start.go:317] joinCluster: &{Name:ha-900414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-900414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:17:29.323528   29751 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 17:17:29.323548   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:17:29.326494   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:17:29.326884   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:17:29.326913   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:17:29.327089   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:17:29.327357   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:17:29.327543   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:17:29.327687   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:17:29.484457   29751 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:17:29.484526   29751 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dg0ylq.mtxxgcl7pxnl45i3 --discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-900414-m02 --control-plane --apiserver-advertise-address=192.168.39.111 --apiserver-bind-port=8443"
	I0729 17:17:54.621435   29751 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dg0ylq.mtxxgcl7pxnl45i3 --discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-900414-m02 --control-plane --apiserver-advertise-address=192.168.39.111 --apiserver-bind-port=8443": (25.136878808s)
	I0729 17:17:54.621488   29751 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 17:17:55.218635   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-900414-m02 minikube.k8s.io/updated_at=2024_07_29T17_17_55_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899 minikube.k8s.io/name=ha-900414 minikube.k8s.io/primary=false
	I0729 17:17:55.355904   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-900414-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 17:17:55.491251   29751 start.go:319] duration metric: took 26.167808458s to joinCluster
	I0729 17:17:55.491328   29751 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:17:55.491643   29751 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:17:55.493003   29751 out.go:177] * Verifying Kubernetes components...
	I0729 17:17:55.494406   29751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:17:55.854120   29751 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:17:55.886164   29751 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 17:17:55.886408   29751 kapi.go:59] client config for ha-900414: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.crt", KeyFile:"/home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.key", CAFile:"/home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 17:17:55.886474   29751 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.114:8443
	I0729 17:17:55.886694   29751 node_ready.go:35] waiting up to 6m0s for node "ha-900414-m02" to be "Ready" ...
	I0729 17:17:55.886787   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:17:55.886794   29751 round_trippers.go:469] Request Headers:
	I0729 17:17:55.886801   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:17:55.886804   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:17:55.908576   29751 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0729 17:17:56.387630   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:17:56.387659   29751 round_trippers.go:469] Request Headers:
	I0729 17:17:56.387671   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:17:56.387680   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:17:56.393305   29751 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 17:17:56.887572   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:17:56.887591   29751 round_trippers.go:469] Request Headers:
	I0729 17:17:56.887599   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:17:56.887605   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:17:56.891861   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:17:57.387556   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:17:57.387575   29751 round_trippers.go:469] Request Headers:
	I0729 17:17:57.387584   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:17:57.387588   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:17:57.390769   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:17:57.886976   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:17:57.887008   29751 round_trippers.go:469] Request Headers:
	I0729 17:17:57.887016   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:17:57.887022   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:17:57.890215   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:17:57.890674   29751 node_ready.go:53] node "ha-900414-m02" has status "Ready":"False"
	I0729 17:17:58.387248   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:17:58.387271   29751 round_trippers.go:469] Request Headers:
	I0729 17:17:58.387281   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:17:58.387286   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:17:58.390520   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:17:58.887094   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:17:58.887124   29751 round_trippers.go:469] Request Headers:
	I0729 17:17:58.887135   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:17:58.887141   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:17:58.890309   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:17:59.387577   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:17:59.387596   29751 round_trippers.go:469] Request Headers:
	I0729 17:17:59.387604   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:17:59.387610   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:17:59.390735   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:17:59.887548   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:17:59.887569   29751 round_trippers.go:469] Request Headers:
	I0729 17:17:59.887577   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:17:59.887581   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:17:59.890851   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:17:59.891615   29751 node_ready.go:53] node "ha-900414-m02" has status "Ready":"False"
	I0729 17:18:00.386953   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:00.386977   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:00.386985   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:00.386988   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:00.390166   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:00.887111   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:00.887130   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:00.887144   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:00.887149   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:00.890789   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:01.387760   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:01.387789   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:01.387801   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:01.387809   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:01.390717   29751 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:18:01.887627   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:01.887650   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:01.887662   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:01.887666   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:01.891414   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:01.892166   29751 node_ready.go:53] node "ha-900414-m02" has status "Ready":"False"
	I0729 17:18:02.387147   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:02.387171   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:02.387181   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:02.387187   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:02.391744   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:18:02.887130   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:02.887150   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:02.887158   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:02.887165   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:02.891177   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:03.387218   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:03.387242   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:03.387255   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:03.387261   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:03.391546   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:18:03.887097   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:03.887119   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:03.887128   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:03.887131   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:03.891034   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:04.386829   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:04.386868   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:04.386877   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:04.386881   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:04.390484   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:04.391532   29751 node_ready.go:53] node "ha-900414-m02" has status "Ready":"False"
	I0729 17:18:04.887415   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:04.887437   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:04.887445   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:04.887449   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:04.891190   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:05.387227   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:05.387246   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:05.387254   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:05.387258   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:05.392017   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:18:05.887597   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:05.887620   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:05.887627   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:05.887630   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:05.890989   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:06.386930   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:06.386953   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:06.386961   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:06.386964   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:06.390026   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:06.887322   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:06.887344   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:06.887352   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:06.887355   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:06.890903   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:06.891449   29751 node_ready.go:53] node "ha-900414-m02" has status "Ready":"False"
	I0729 17:18:07.387915   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:07.387941   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:07.387954   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:07.387962   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:07.392125   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:18:07.886969   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:07.886991   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:07.886999   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:07.887006   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:07.890141   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:08.387240   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:08.387261   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:08.387269   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:08.387275   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:08.389917   29751 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:18:08.887289   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:08.887313   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:08.887321   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:08.887324   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:08.890473   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:09.387176   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:09.387197   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:09.387209   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:09.387215   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:09.390670   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:09.391446   29751 node_ready.go:53] node "ha-900414-m02" has status "Ready":"False"
	I0729 17:18:09.887268   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:09.887288   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:09.887296   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:09.887301   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:09.890603   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:09.891231   29751 node_ready.go:49] node "ha-900414-m02" has status "Ready":"True"
	I0729 17:18:09.891246   29751 node_ready.go:38] duration metric: took 14.004538508s for node "ha-900414-m02" to be "Ready" ...
	I0729 17:18:09.891257   29751 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 17:18:09.891316   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I0729 17:18:09.891327   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:09.891337   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:09.891344   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:09.897157   29751 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 17:18:09.903216   29751 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-48j6w" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:09.903304   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-48j6w
	I0729 17:18:09.903315   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:09.903325   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:09.903334   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:09.906411   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:09.907082   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:18:09.907101   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:09.907111   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:09.907116   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:09.910206   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:09.910796   29751 pod_ready.go:92] pod "coredns-7db6d8ff4d-48j6w" in "kube-system" namespace has status "Ready":"True"
	I0729 17:18:09.910817   29751 pod_ready.go:81] duration metric: took 7.577654ms for pod "coredns-7db6d8ff4d-48j6w" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:09.910829   29751 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9r87x" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:09.910889   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9r87x
	I0729 17:18:09.910897   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:09.910905   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:09.910910   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:09.915990   29751 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 17:18:09.916658   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:18:09.916675   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:09.916682   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:09.916688   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:09.919834   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:09.920446   29751 pod_ready.go:92] pod "coredns-7db6d8ff4d-9r87x" in "kube-system" namespace has status "Ready":"True"
	I0729 17:18:09.920463   29751 pod_ready.go:81] duration metric: took 9.626107ms for pod "coredns-7db6d8ff4d-9r87x" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:09.920473   29751 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:09.920525   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-900414
	I0729 17:18:09.920535   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:09.920545   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:09.920553   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:09.925501   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:18:09.926713   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:18:09.926725   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:09.926735   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:09.926740   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:09.932445   29751 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 17:18:09.933590   29751 pod_ready.go:92] pod "etcd-ha-900414" in "kube-system" namespace has status "Ready":"True"
	I0729 17:18:09.933607   29751 pod_ready.go:81] duration metric: took 13.127022ms for pod "etcd-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:09.933618   29751 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:09.933669   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-900414-m02
	I0729 17:18:09.933681   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:09.933690   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:09.933698   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:09.937108   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:09.937740   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:09.937754   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:09.937763   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:09.937769   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:09.940278   29751 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:18:09.940802   29751 pod_ready.go:92] pod "etcd-ha-900414-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:18:09.940814   29751 pod_ready.go:81] duration metric: took 7.189004ms for pod "etcd-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:09.940831   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:10.088170   29751 request.go:629] Waited for 147.28026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-900414
	I0729 17:18:10.088233   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-900414
	I0729 17:18:10.088241   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:10.088252   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:10.088260   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:10.091479   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:10.287527   29751 request.go:629] Waited for 195.283397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:18:10.287593   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:18:10.287599   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:10.287607   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:10.287611   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:10.290769   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:10.291502   29751 pod_ready.go:92] pod "kube-apiserver-ha-900414" in "kube-system" namespace has status "Ready":"True"
	I0729 17:18:10.291522   29751 pod_ready.go:81] duration metric: took 350.680754ms for pod "kube-apiserver-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:10.291535   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:10.487521   29751 request.go:629] Waited for 195.924793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-900414-m02
	I0729 17:18:10.487588   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-900414-m02
	I0729 17:18:10.487615   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:10.487622   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:10.487627   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:10.492111   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:18:10.688171   29751 request.go:629] Waited for 195.330567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:10.688243   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:10.688250   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:10.688260   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:10.688268   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:10.691797   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:10.692474   29751 pod_ready.go:92] pod "kube-apiserver-ha-900414-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:18:10.692492   29751 pod_ready.go:81] duration metric: took 400.948997ms for pod "kube-apiserver-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:10.692507   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:10.887595   29751 request.go:629] Waited for 195.024359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-900414
	I0729 17:18:10.887652   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-900414
	I0729 17:18:10.887657   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:10.887665   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:10.887669   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:10.891054   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:11.088138   29751 request.go:629] Waited for 196.403846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:18:11.088238   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:18:11.088249   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:11.088265   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:11.088276   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:11.091389   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:11.091992   29751 pod_ready.go:92] pod "kube-controller-manager-ha-900414" in "kube-system" namespace has status "Ready":"True"
	I0729 17:18:11.092006   29751 pod_ready.go:81] duration metric: took 399.489771ms for pod "kube-controller-manager-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:11.092015   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:11.288104   29751 request.go:629] Waited for 196.035602ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-900414-m02
	I0729 17:18:11.288172   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-900414-m02
	I0729 17:18:11.288179   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:11.288189   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:11.288200   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:11.293624   29751 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 17:18:11.487678   29751 request.go:629] Waited for 193.334234ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:11.487740   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:11.487745   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:11.487753   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:11.487758   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:11.491175   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:11.491594   29751 pod_ready.go:92] pod "kube-controller-manager-ha-900414-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:18:11.491612   29751 pod_ready.go:81] duration metric: took 399.590285ms for pod "kube-controller-manager-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:11.491624   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bgq99" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:11.687816   29751 request.go:629] Waited for 196.119916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bgq99
	I0729 17:18:11.687890   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bgq99
	I0729 17:18:11.687896   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:11.687904   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:11.687907   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:11.691417   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:11.888300   29751 request.go:629] Waited for 196.368766ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:11.888408   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:11.888420   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:11.888434   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:11.888446   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:11.891967   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:11.892484   29751 pod_ready.go:92] pod "kube-proxy-bgq99" in "kube-system" namespace has status "Ready":"True"
	I0729 17:18:11.892501   29751 pod_ready.go:81] duration metric: took 400.869993ms for pod "kube-proxy-bgq99" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:11.892510   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tng4t" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:12.087671   29751 request.go:629] Waited for 195.094842ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tng4t
	I0729 17:18:12.087728   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tng4t
	I0729 17:18:12.087734   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:12.087741   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:12.087745   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:12.091271   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:12.287380   29751 request.go:629] Waited for 195.269885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:18:12.287443   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:18:12.287455   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:12.287471   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:12.287481   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:12.291276   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:12.291896   29751 pod_ready.go:92] pod "kube-proxy-tng4t" in "kube-system" namespace has status "Ready":"True"
	I0729 17:18:12.291920   29751 pod_ready.go:81] duration metric: took 399.402647ms for pod "kube-proxy-tng4t" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:12.291929   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:12.488009   29751 request.go:629] Waited for 196.00899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-900414
	I0729 17:18:12.488062   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-900414
	I0729 17:18:12.488067   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:12.488075   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:12.488078   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:12.491312   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:12.688180   29751 request.go:629] Waited for 196.383034ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:18:12.688232   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:18:12.688237   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:12.688245   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:12.688248   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:12.696022   29751 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 17:18:12.696490   29751 pod_ready.go:92] pod "kube-scheduler-ha-900414" in "kube-system" namespace has status "Ready":"True"
	I0729 17:18:12.696507   29751 pod_ready.go:81] duration metric: took 404.57204ms for pod "kube-scheduler-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:12.696516   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:12.887592   29751 request.go:629] Waited for 190.996178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-900414-m02
	I0729 17:18:12.887648   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-900414-m02
	I0729 17:18:12.887654   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:12.887663   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:12.887668   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:12.890813   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:13.087839   29751 request.go:629] Waited for 196.380669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:13.087913   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:18:13.087925   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:13.087934   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:13.087942   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:13.091231   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:13.092267   29751 pod_ready.go:92] pod "kube-scheduler-ha-900414-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:18:13.092283   29751 pod_ready.go:81] duration metric: took 395.761219ms for pod "kube-scheduler-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:18:13.092294   29751 pod_ready.go:38] duration metric: took 3.201024864s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 17:18:13.092318   29751 api_server.go:52] waiting for apiserver process to appear ...
	I0729 17:18:13.092371   29751 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:18:13.109831   29751 api_server.go:72] duration metric: took 17.618467467s to wait for apiserver process to appear ...
	I0729 17:18:13.109873   29751 api_server.go:88] waiting for apiserver healthz status ...
	I0729 17:18:13.109904   29751 api_server.go:253] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I0729 17:18:13.113869   29751 api_server.go:279] https://192.168.39.114:8443/healthz returned 200:
	ok
	I0729 17:18:13.113926   29751 round_trippers.go:463] GET https://192.168.39.114:8443/version
	I0729 17:18:13.113935   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:13.113944   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:13.113954   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:13.114730   29751 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 17:18:13.114838   29751 api_server.go:141] control plane version: v1.30.3
	I0729 17:18:13.114859   29751 api_server.go:131] duration metric: took 4.976083ms to wait for apiserver health ...
	I0729 17:18:13.114868   29751 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 17:18:13.288310   29751 request.go:629] Waited for 173.35802ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I0729 17:18:13.288380   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I0729 17:18:13.288393   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:13.288407   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:13.288419   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:13.294660   29751 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 17:18:13.299447   29751 system_pods.go:59] 17 kube-system pods found
	I0729 17:18:13.299478   29751 system_pods.go:61] "coredns-7db6d8ff4d-48j6w" [306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d] Running
	I0729 17:18:13.299485   29751 system_pods.go:61] "coredns-7db6d8ff4d-9r87x" [fcc4709f-f07b-4694-a352-aedd9c67bbb2] Running
	I0729 17:18:13.299493   29751 system_pods.go:61] "etcd-ha-900414" [96243a16-1b51-4136-bc25-f3a0da2f7500] Running
	I0729 17:18:13.299503   29751 system_pods.go:61] "etcd-ha-900414-m02" [29c61208-cebd-4d6b-addf-426efcc78899] Running
	I0729 17:18:13.299508   29751 system_pods.go:61] "kindnet-kdzhk" [d86b52ee-7d4c-4530-afa1-88cf8ad77379] Running
	I0729 17:18:13.299513   29751 system_pods.go:61] "kindnet-z9cvz" [c2177daa-4efb-478c-845f-f30e77e91684] Running
	I0729 17:18:13.299519   29751 system_pods.go:61] "kube-apiserver-ha-900414" [2a4045e8-a900-4ebd-b36e-95083ab251c9] Running
	I0729 17:18:13.299523   29751 system_pods.go:61] "kube-apiserver-ha-900414-m02" [28c2e5cf-876b-4b77-b9c7-406642dc4df6] Running
	I0729 17:18:13.299527   29751 system_pods.go:61] "kube-controller-manager-ha-900414" [62bb9ded-db08-49a0-aea4-8806d0e8d294] Running
	I0729 17:18:13.299530   29751 system_pods.go:61] "kube-controller-manager-ha-900414-m02" [88418c96-4611-4276-91c6-ae9b67d4ae74] Running
	I0729 17:18:13.299533   29751 system_pods.go:61] "kube-proxy-bgq99" [0258cc44-f6ff-4294-a621-61b172247e15] Running
	I0729 17:18:13.299536   29751 system_pods.go:61] "kube-proxy-tng4t" [2303269f-50d3-4a63-aa76-891f001e6f5d] Running
	I0729 17:18:13.299539   29751 system_pods.go:61] "kube-scheduler-ha-900414" [3d41b818-c8ad-4dbb-bc7b-73f578d33539] Running
	I0729 17:18:13.299542   29751 system_pods.go:61] "kube-scheduler-ha-900414-m02" [f9cc318d-be18-4858-9712-b92f11027b65] Running
	I0729 17:18:13.299545   29751 system_pods.go:61] "kube-vip-ha-900414" [bf3918b4-6cc5-499b-808e-b6c33138cae2] Running
	I0729 17:18:13.299548   29751 system_pods.go:61] "kube-vip-ha-900414-m02" [9fad8ffb-6d3c-44ba-9700-e0e4d70a5f71] Running
	I0729 17:18:13.299551   29751 system_pods.go:61] "storage-provisioner" [50fa96e8-1ee5-4e09-a734-802dbcd02bcc] Running
	I0729 17:18:13.299557   29751 system_pods.go:74] duration metric: took 184.679927ms to wait for pod list to return data ...
	I0729 17:18:13.299567   29751 default_sa.go:34] waiting for default service account to be created ...
	I0729 17:18:13.488051   29751 request.go:629] Waited for 188.41705ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/default/serviceaccounts
	I0729 17:18:13.488136   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/default/serviceaccounts
	I0729 17:18:13.488150   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:13.488159   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:13.488167   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:13.491332   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:13.491567   29751 default_sa.go:45] found service account: "default"
	I0729 17:18:13.491583   29751 default_sa.go:55] duration metric: took 192.010397ms for default service account to be created ...
	I0729 17:18:13.491592   29751 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 17:18:13.688073   29751 request.go:629] Waited for 196.406178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I0729 17:18:13.688138   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I0729 17:18:13.688145   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:13.688155   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:13.688160   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:13.693426   29751 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 17:18:13.698826   29751 system_pods.go:86] 17 kube-system pods found
	I0729 17:18:13.698849   29751 system_pods.go:89] "coredns-7db6d8ff4d-48j6w" [306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d] Running
	I0729 17:18:13.698855   29751 system_pods.go:89] "coredns-7db6d8ff4d-9r87x" [fcc4709f-f07b-4694-a352-aedd9c67bbb2] Running
	I0729 17:18:13.698859   29751 system_pods.go:89] "etcd-ha-900414" [96243a16-1b51-4136-bc25-f3a0da2f7500] Running
	I0729 17:18:13.698864   29751 system_pods.go:89] "etcd-ha-900414-m02" [29c61208-cebd-4d6b-addf-426efcc78899] Running
	I0729 17:18:13.698868   29751 system_pods.go:89] "kindnet-kdzhk" [d86b52ee-7d4c-4530-afa1-88cf8ad77379] Running
	I0729 17:18:13.698873   29751 system_pods.go:89] "kindnet-z9cvz" [c2177daa-4efb-478c-845f-f30e77e91684] Running
	I0729 17:18:13.698877   29751 system_pods.go:89] "kube-apiserver-ha-900414" [2a4045e8-a900-4ebd-b36e-95083ab251c9] Running
	I0729 17:18:13.698881   29751 system_pods.go:89] "kube-apiserver-ha-900414-m02" [28c2e5cf-876b-4b77-b9c7-406642dc4df6] Running
	I0729 17:18:13.698886   29751 system_pods.go:89] "kube-controller-manager-ha-900414" [62bb9ded-db08-49a0-aea4-8806d0e8d294] Running
	I0729 17:18:13.698891   29751 system_pods.go:89] "kube-controller-manager-ha-900414-m02" [88418c96-4611-4276-91c6-ae9b67d4ae74] Running
	I0729 17:18:13.698897   29751 system_pods.go:89] "kube-proxy-bgq99" [0258cc44-f6ff-4294-a621-61b172247e15] Running
	I0729 17:18:13.698902   29751 system_pods.go:89] "kube-proxy-tng4t" [2303269f-50d3-4a63-aa76-891f001e6f5d] Running
	I0729 17:18:13.698905   29751 system_pods.go:89] "kube-scheduler-ha-900414" [3d41b818-c8ad-4dbb-bc7b-73f578d33539] Running
	I0729 17:18:13.698909   29751 system_pods.go:89] "kube-scheduler-ha-900414-m02" [f9cc318d-be18-4858-9712-b92f11027b65] Running
	I0729 17:18:13.698913   29751 system_pods.go:89] "kube-vip-ha-900414" [bf3918b4-6cc5-499b-808e-b6c33138cae2] Running
	I0729 17:18:13.698917   29751 system_pods.go:89] "kube-vip-ha-900414-m02" [9fad8ffb-6d3c-44ba-9700-e0e4d70a5f71] Running
	I0729 17:18:13.698920   29751 system_pods.go:89] "storage-provisioner" [50fa96e8-1ee5-4e09-a734-802dbcd02bcc] Running
	I0729 17:18:13.698927   29751 system_pods.go:126] duration metric: took 207.325939ms to wait for k8s-apps to be running ...
	I0729 17:18:13.698942   29751 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 17:18:13.698986   29751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:18:13.714277   29751 system_svc.go:56] duration metric: took 15.328082ms WaitForService to wait for kubelet
	I0729 17:18:13.714304   29751 kubeadm.go:582] duration metric: took 18.222944304s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:18:13.714324   29751 node_conditions.go:102] verifying NodePressure condition ...
	I0729 17:18:13.887756   29751 request.go:629] Waited for 173.332508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes
	I0729 17:18:13.887809   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes
	I0729 17:18:13.887814   29751 round_trippers.go:469] Request Headers:
	I0729 17:18:13.887821   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:18:13.887825   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:18:13.891192   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:18:13.891807   29751 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 17:18:13.891827   29751 node_conditions.go:123] node cpu capacity is 2
	I0729 17:18:13.891844   29751 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 17:18:13.891847   29751 node_conditions.go:123] node cpu capacity is 2
	I0729 17:18:13.891851   29751 node_conditions.go:105] duration metric: took 177.523205ms to run NodePressure ...
	I0729 17:18:13.891876   29751 start.go:241] waiting for startup goroutines ...
	I0729 17:18:13.891900   29751 start.go:255] writing updated cluster config ...
	I0729 17:18:13.893849   29751 out.go:177] 
	I0729 17:18:13.895505   29751 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:18:13.895594   29751 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/config.json ...
	I0729 17:18:13.897076   29751 out.go:177] * Starting "ha-900414-m03" control-plane node in "ha-900414" cluster
	I0729 17:18:13.898077   29751 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:18:13.898093   29751 cache.go:56] Caching tarball of preloaded images
	I0729 17:18:13.898193   29751 preload.go:172] Found /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 17:18:13.898205   29751 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 17:18:13.898279   29751 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/config.json ...
	I0729 17:18:13.898447   29751 start.go:360] acquireMachinesLock for ha-900414-m03: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:18:13.898489   29751 start.go:364] duration metric: took 22.948µs to acquireMachinesLock for "ha-900414-m03"
	I0729 17:18:13.898503   29751 start.go:93] Provisioning new machine with config: &{Name:ha-900414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-900414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:18:13.898590   29751 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0729 17:18:13.899887   29751 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 17:18:13.899984   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:18:13.900018   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:18:13.915789   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44245
	I0729 17:18:13.916164   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:18:13.916591   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:18:13.916616   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:18:13.916890   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:18:13.917032   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetMachineName
	I0729 17:18:13.917169   29751 main.go:141] libmachine: (ha-900414-m03) Calling .DriverName
	I0729 17:18:13.917313   29751 start.go:159] libmachine.API.Create for "ha-900414" (driver="kvm2")
	I0729 17:18:13.917336   29751 client.go:168] LocalClient.Create starting
	I0729 17:18:13.917366   29751 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem
	I0729 17:18:13.917402   29751 main.go:141] libmachine: Decoding PEM data...
	I0729 17:18:13.917421   29751 main.go:141] libmachine: Parsing certificate...
	I0729 17:18:13.917486   29751 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem
	I0729 17:18:13.917516   29751 main.go:141] libmachine: Decoding PEM data...
	I0729 17:18:13.917534   29751 main.go:141] libmachine: Parsing certificate...
	I0729 17:18:13.917559   29751 main.go:141] libmachine: Running pre-create checks...
	I0729 17:18:13.917568   29751 main.go:141] libmachine: (ha-900414-m03) Calling .PreCreateCheck
	I0729 17:18:13.917752   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetConfigRaw
	I0729 17:18:13.918086   29751 main.go:141] libmachine: Creating machine...
	I0729 17:18:13.918102   29751 main.go:141] libmachine: (ha-900414-m03) Calling .Create
	I0729 17:18:13.918221   29751 main.go:141] libmachine: (ha-900414-m03) Creating KVM machine...
	I0729 17:18:13.919564   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found existing default KVM network
	I0729 17:18:13.919766   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found existing private KVM network mk-ha-900414
	I0729 17:18:13.919919   29751 main.go:141] libmachine: (ha-900414-m03) Setting up store path in /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03 ...
	I0729 17:18:13.919939   29751 main.go:141] libmachine: (ha-900414-m03) Building disk image from file:///home/jenkins/minikube-integration/19345-11206/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 17:18:13.920041   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:13.919918   30991 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 17:18:13.920084   29751 main.go:141] libmachine: (ha-900414-m03) Downloading /home/jenkins/minikube-integration/19345-11206/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19345-11206/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 17:18:14.156338   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:14.156236   30991 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/id_rsa...
	I0729 17:18:14.216469   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:14.216360   30991 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/ha-900414-m03.rawdisk...
	I0729 17:18:14.216498   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Writing magic tar header
	I0729 17:18:14.216512   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Writing SSH key tar header
	I0729 17:18:14.216586   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:14.216530   30991 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03 ...
	I0729 17:18:14.216703   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03
	I0729 17:18:14.216731   29751 main.go:141] libmachine: (ha-900414-m03) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03 (perms=drwx------)
	I0729 17:18:14.216743   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube/machines
	I0729 17:18:14.216757   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 17:18:14.216767   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206
	I0729 17:18:14.216780   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 17:18:14.216788   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Checking permissions on dir: /home/jenkins
	I0729 17:18:14.216797   29751 main.go:141] libmachine: (ha-900414-m03) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube/machines (perms=drwxr-xr-x)
	I0729 17:18:14.216810   29751 main.go:141] libmachine: (ha-900414-m03) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube (perms=drwxr-xr-x)
	I0729 17:18:14.216824   29751 main.go:141] libmachine: (ha-900414-m03) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206 (perms=drwxrwxr-x)
	I0729 17:18:14.216836   29751 main.go:141] libmachine: (ha-900414-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 17:18:14.216848   29751 main.go:141] libmachine: (ha-900414-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 17:18:14.216866   29751 main.go:141] libmachine: (ha-900414-m03) Creating domain...
	I0729 17:18:14.216887   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Checking permissions on dir: /home
	I0729 17:18:14.216916   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Skipping /home - not owner
	I0729 17:18:14.217981   29751 main.go:141] libmachine: (ha-900414-m03) define libvirt domain using xml: 
	I0729 17:18:14.218003   29751 main.go:141] libmachine: (ha-900414-m03) <domain type='kvm'>
	I0729 17:18:14.218019   29751 main.go:141] libmachine: (ha-900414-m03)   <name>ha-900414-m03</name>
	I0729 17:18:14.218027   29751 main.go:141] libmachine: (ha-900414-m03)   <memory unit='MiB'>2200</memory>
	I0729 17:18:14.218036   29751 main.go:141] libmachine: (ha-900414-m03)   <vcpu>2</vcpu>
	I0729 17:18:14.218043   29751 main.go:141] libmachine: (ha-900414-m03)   <features>
	I0729 17:18:14.218054   29751 main.go:141] libmachine: (ha-900414-m03)     <acpi/>
	I0729 17:18:14.218060   29751 main.go:141] libmachine: (ha-900414-m03)     <apic/>
	I0729 17:18:14.218068   29751 main.go:141] libmachine: (ha-900414-m03)     <pae/>
	I0729 17:18:14.218078   29751 main.go:141] libmachine: (ha-900414-m03)     
	I0729 17:18:14.218086   29751 main.go:141] libmachine: (ha-900414-m03)   </features>
	I0729 17:18:14.218094   29751 main.go:141] libmachine: (ha-900414-m03)   <cpu mode='host-passthrough'>
	I0729 17:18:14.218103   29751 main.go:141] libmachine: (ha-900414-m03)   
	I0729 17:18:14.218112   29751 main.go:141] libmachine: (ha-900414-m03)   </cpu>
	I0729 17:18:14.218122   29751 main.go:141] libmachine: (ha-900414-m03)   <os>
	I0729 17:18:14.218134   29751 main.go:141] libmachine: (ha-900414-m03)     <type>hvm</type>
	I0729 17:18:14.218144   29751 main.go:141] libmachine: (ha-900414-m03)     <boot dev='cdrom'/>
	I0729 17:18:14.218160   29751 main.go:141] libmachine: (ha-900414-m03)     <boot dev='hd'/>
	I0729 17:18:14.218170   29751 main.go:141] libmachine: (ha-900414-m03)     <bootmenu enable='no'/>
	I0729 17:18:14.218192   29751 main.go:141] libmachine: (ha-900414-m03)   </os>
	I0729 17:18:14.218223   29751 main.go:141] libmachine: (ha-900414-m03)   <devices>
	I0729 17:18:14.218243   29751 main.go:141] libmachine: (ha-900414-m03)     <disk type='file' device='cdrom'>
	I0729 17:18:14.218263   29751 main.go:141] libmachine: (ha-900414-m03)       <source file='/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/boot2docker.iso'/>
	I0729 17:18:14.218277   29751 main.go:141] libmachine: (ha-900414-m03)       <target dev='hdc' bus='scsi'/>
	I0729 17:18:14.218288   29751 main.go:141] libmachine: (ha-900414-m03)       <readonly/>
	I0729 17:18:14.218299   29751 main.go:141] libmachine: (ha-900414-m03)     </disk>
	I0729 17:18:14.218310   29751 main.go:141] libmachine: (ha-900414-m03)     <disk type='file' device='disk'>
	I0729 17:18:14.218325   29751 main.go:141] libmachine: (ha-900414-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 17:18:14.218340   29751 main.go:141] libmachine: (ha-900414-m03)       <source file='/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/ha-900414-m03.rawdisk'/>
	I0729 17:18:14.218354   29751 main.go:141] libmachine: (ha-900414-m03)       <target dev='hda' bus='virtio'/>
	I0729 17:18:14.218382   29751 main.go:141] libmachine: (ha-900414-m03)     </disk>
	I0729 17:18:14.218397   29751 main.go:141] libmachine: (ha-900414-m03)     <interface type='network'>
	I0729 17:18:14.218405   29751 main.go:141] libmachine: (ha-900414-m03)       <source network='mk-ha-900414'/>
	I0729 17:18:14.218418   29751 main.go:141] libmachine: (ha-900414-m03)       <model type='virtio'/>
	I0729 17:18:14.218426   29751 main.go:141] libmachine: (ha-900414-m03)     </interface>
	I0729 17:18:14.218435   29751 main.go:141] libmachine: (ha-900414-m03)     <interface type='network'>
	I0729 17:18:14.218449   29751 main.go:141] libmachine: (ha-900414-m03)       <source network='default'/>
	I0729 17:18:14.218463   29751 main.go:141] libmachine: (ha-900414-m03)       <model type='virtio'/>
	I0729 17:18:14.218476   29751 main.go:141] libmachine: (ha-900414-m03)     </interface>
	I0729 17:18:14.218487   29751 main.go:141] libmachine: (ha-900414-m03)     <serial type='pty'>
	I0729 17:18:14.218494   29751 main.go:141] libmachine: (ha-900414-m03)       <target port='0'/>
	I0729 17:18:14.218506   29751 main.go:141] libmachine: (ha-900414-m03)     </serial>
	I0729 17:18:14.218512   29751 main.go:141] libmachine: (ha-900414-m03)     <console type='pty'>
	I0729 17:18:14.218523   29751 main.go:141] libmachine: (ha-900414-m03)       <target type='serial' port='0'/>
	I0729 17:18:14.218531   29751 main.go:141] libmachine: (ha-900414-m03)     </console>
	I0729 17:18:14.218543   29751 main.go:141] libmachine: (ha-900414-m03)     <rng model='virtio'>
	I0729 17:18:14.218558   29751 main.go:141] libmachine: (ha-900414-m03)       <backend model='random'>/dev/random</backend>
	I0729 17:18:14.218566   29751 main.go:141] libmachine: (ha-900414-m03)     </rng>
	I0729 17:18:14.218575   29751 main.go:141] libmachine: (ha-900414-m03)     
	I0729 17:18:14.218582   29751 main.go:141] libmachine: (ha-900414-m03)     
	I0729 17:18:14.218593   29751 main.go:141] libmachine: (ha-900414-m03)   </devices>
	I0729 17:18:14.218604   29751 main.go:141] libmachine: (ha-900414-m03) </domain>
	I0729 17:18:14.218615   29751 main.go:141] libmachine: (ha-900414-m03) 
	I0729 17:18:14.225148   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:b1:6e:1c in network default
	I0729 17:18:14.225743   29751 main.go:141] libmachine: (ha-900414-m03) Ensuring networks are active...
	I0729 17:18:14.225762   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:14.226526   29751 main.go:141] libmachine: (ha-900414-m03) Ensuring network default is active
	I0729 17:18:14.226842   29751 main.go:141] libmachine: (ha-900414-m03) Ensuring network mk-ha-900414 is active
	I0729 17:18:14.227197   29751 main.go:141] libmachine: (ha-900414-m03) Getting domain xml...
	I0729 17:18:14.228032   29751 main.go:141] libmachine: (ha-900414-m03) Creating domain...
	I0729 17:18:15.454164   29751 main.go:141] libmachine: (ha-900414-m03) Waiting to get IP...
	I0729 17:18:15.455018   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:15.455501   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:15.455559   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:15.455499   30991 retry.go:31] will retry after 246.816517ms: waiting for machine to come up
	I0729 17:18:15.703907   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:15.704365   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:15.704392   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:15.704314   30991 retry.go:31] will retry after 245.373334ms: waiting for machine to come up
	I0729 17:18:15.951830   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:15.952257   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:15.952280   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:15.952232   30991 retry.go:31] will retry after 485.466801ms: waiting for machine to come up
	I0729 17:18:16.439601   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:16.440052   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:16.440079   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:16.440003   30991 retry.go:31] will retry after 473.462646ms: waiting for machine to come up
	I0729 17:18:16.914497   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:16.914866   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:16.914891   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:16.914828   30991 retry.go:31] will retry after 726.597775ms: waiting for machine to come up
	I0729 17:18:17.642694   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:17.643183   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:17.643212   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:17.643131   30991 retry.go:31] will retry after 629.97819ms: waiting for machine to come up
	I0729 17:18:18.274868   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:18.275362   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:18.275383   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:18.275319   30991 retry.go:31] will retry after 1.120227935s: waiting for machine to come up
	I0729 17:18:19.397310   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:19.397890   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:19.397915   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:19.397832   30991 retry.go:31] will retry after 1.410249374s: waiting for machine to come up
	I0729 17:18:20.810390   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:20.810770   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:20.810792   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:20.810719   30991 retry.go:31] will retry after 1.713663054s: waiting for machine to come up
	I0729 17:18:22.526050   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:22.526512   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:22.526539   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:22.526467   30991 retry.go:31] will retry after 1.966005335s: waiting for machine to come up
	I0729 17:18:24.494120   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:24.494550   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:24.494576   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:24.494501   30991 retry.go:31] will retry after 1.93915854s: waiting for machine to come up
	I0729 17:18:26.435501   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:26.435943   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:26.435970   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:26.435906   30991 retry.go:31] will retry after 3.220477941s: waiting for machine to come up
	I0729 17:18:29.658111   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:29.658606   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:29.658624   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:29.658599   30991 retry.go:31] will retry after 3.433937188s: waiting for machine to come up
	I0729 17:18:33.093711   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:33.094160   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find current IP address of domain ha-900414-m03 in network mk-ha-900414
	I0729 17:18:33.094187   29751 main.go:141] libmachine: (ha-900414-m03) DBG | I0729 17:18:33.094117   30991 retry.go:31] will retry after 5.222497284s: waiting for machine to come up
	I0729 17:18:38.319384   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:38.319856   29751 main.go:141] libmachine: (ha-900414-m03) Found IP for machine: 192.168.39.6
	I0729 17:18:38.319876   29751 main.go:141] libmachine: (ha-900414-m03) Reserving static IP address...
	I0729 17:18:38.319885   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has current primary IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:38.320272   29751 main.go:141] libmachine: (ha-900414-m03) DBG | unable to find host DHCP lease matching {name: "ha-900414-m03", mac: "52:54:00:df:ef:4e", ip: "192.168.39.6"} in network mk-ha-900414
	I0729 17:18:38.391778   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Getting to WaitForSSH function...
	I0729 17:18:38.391811   29751 main.go:141] libmachine: (ha-900414-m03) Reserved static IP address: 192.168.39.6
	I0729 17:18:38.391857   29751 main.go:141] libmachine: (ha-900414-m03) Waiting for SSH to be available...
	I0729 17:18:38.394804   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:38.395386   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:minikube Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:38.395424   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:38.395631   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Using SSH client type: external
	I0729 17:18:38.395650   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/id_rsa (-rw-------)
	I0729 17:18:38.395684   29751 main.go:141] libmachine: (ha-900414-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.6 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 17:18:38.395694   29751 main.go:141] libmachine: (ha-900414-m03) DBG | About to run SSH command:
	I0729 17:18:38.395710   29751 main.go:141] libmachine: (ha-900414-m03) DBG | exit 0
	I0729 17:18:38.522593   29751 main.go:141] libmachine: (ha-900414-m03) DBG | SSH cmd err, output: <nil>: 
	I0729 17:18:38.522888   29751 main.go:141] libmachine: (ha-900414-m03) KVM machine creation complete!
	I0729 17:18:38.523242   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetConfigRaw
	I0729 17:18:38.523691   29751 main.go:141] libmachine: (ha-900414-m03) Calling .DriverName
	I0729 17:18:38.523865   29751 main.go:141] libmachine: (ha-900414-m03) Calling .DriverName
	I0729 17:18:38.524008   29751 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 17:18:38.524022   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetState
	I0729 17:18:38.525265   29751 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 17:18:38.525279   29751 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 17:18:38.525296   29751 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 17:18:38.525305   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:18:38.527540   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:38.527956   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:38.527986   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:38.528120   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:18:38.528302   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:38.528441   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:38.528562   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:18:38.528701   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:18:38.528901   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0729 17:18:38.528912   29751 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 17:18:38.645896   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:18:38.645919   29751 main.go:141] libmachine: Detecting the provisioner...
	I0729 17:18:38.645927   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:18:38.648526   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:38.648855   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:38.648896   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:38.649028   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:18:38.649220   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:38.649396   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:38.649515   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:18:38.649636   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:18:38.649784   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0729 17:18:38.649793   29751 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 17:18:38.755214   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 17:18:38.755271   29751 main.go:141] libmachine: found compatible host: buildroot
	I0729 17:18:38.755278   29751 main.go:141] libmachine: Provisioning with buildroot...
	I0729 17:18:38.755285   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetMachineName
	I0729 17:18:38.755503   29751 buildroot.go:166] provisioning hostname "ha-900414-m03"
	I0729 17:18:38.755531   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetMachineName
	I0729 17:18:38.755718   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:18:38.758316   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:38.758703   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:38.758733   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:38.758836   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:18:38.758985   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:38.759144   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:38.759277   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:18:38.759433   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:18:38.759575   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0729 17:18:38.759586   29751 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-900414-m03 && echo "ha-900414-m03" | sudo tee /etc/hostname
	I0729 17:18:38.882446   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-900414-m03
	
	I0729 17:18:38.882477   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:18:38.885220   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:38.885619   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:38.885644   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:38.885838   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:18:38.886006   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:38.886159   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:38.886286   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:18:38.886465   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:18:38.886616   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0729 17:18:38.886632   29751 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-900414-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-900414-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-900414-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 17:18:39.005370   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:18:39.005402   29751 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 17:18:39.005421   29751 buildroot.go:174] setting up certificates
	I0729 17:18:39.005431   29751 provision.go:84] configureAuth start
	I0729 17:18:39.005447   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetMachineName
	I0729 17:18:39.005732   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetIP
	I0729 17:18:39.008861   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.009335   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:39.009365   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.009509   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:18:39.011833   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.012220   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:39.012251   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.012398   29751 provision.go:143] copyHostCerts
	I0729 17:18:39.012428   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 17:18:39.012475   29751 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 17:18:39.012486   29751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 17:18:39.012572   29751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 17:18:39.012674   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 17:18:39.012700   29751 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 17:18:39.012709   29751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 17:18:39.012739   29751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 17:18:39.012792   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 17:18:39.012814   29751 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 17:18:39.012822   29751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 17:18:39.012858   29751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 17:18:39.012928   29751 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.ha-900414-m03 san=[127.0.0.1 192.168.39.6 ha-900414-m03 localhost minikube]
	I0729 17:18:39.065377   29751 provision.go:177] copyRemoteCerts
	I0729 17:18:39.065440   29751 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 17:18:39.065468   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:18:39.068586   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.068997   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:39.069018   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.069216   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:18:39.069424   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:39.069575   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:18:39.069708   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/id_rsa Username:docker}
	I0729 17:18:39.156161   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 17:18:39.156219   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 17:18:39.180706   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 17:18:39.180791   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 17:18:39.206593   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 17:18:39.206659   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 17:18:39.233410   29751 provision.go:87] duration metric: took 227.965466ms to configureAuth
	I0729 17:18:39.233435   29751 buildroot.go:189] setting minikube options for container-runtime
	I0729 17:18:39.233657   29751 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:18:39.233753   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:18:39.236299   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.236654   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:39.236682   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.236826   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:18:39.237048   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:39.237234   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:39.237392   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:18:39.237560   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:18:39.237724   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0729 17:18:39.237737   29751 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 17:18:39.507521   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 17:18:39.507556   29751 main.go:141] libmachine: Checking connection to Docker...
	I0729 17:18:39.507566   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetURL
	I0729 17:18:39.508893   29751 main.go:141] libmachine: (ha-900414-m03) DBG | Using libvirt version 6000000
	I0729 17:18:39.511229   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.511654   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:39.511673   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.511853   29751 main.go:141] libmachine: Docker is up and running!
	I0729 17:18:39.511866   29751 main.go:141] libmachine: Reticulating splines...
	I0729 17:18:39.511874   29751 client.go:171] duration metric: took 25.594531169s to LocalClient.Create
	I0729 17:18:39.511916   29751 start.go:167] duration metric: took 25.594604458s to libmachine.API.Create "ha-900414"
	I0729 17:18:39.511927   29751 start.go:293] postStartSetup for "ha-900414-m03" (driver="kvm2")
	I0729 17:18:39.511935   29751 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 17:18:39.511950   29751 main.go:141] libmachine: (ha-900414-m03) Calling .DriverName
	I0729 17:18:39.512166   29751 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 17:18:39.512189   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:18:39.514637   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.514981   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:39.515004   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.515082   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:18:39.515268   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:39.515394   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:18:39.515512   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/id_rsa Username:docker}
	I0729 17:18:39.600547   29751 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 17:18:39.604970   29751 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 17:18:39.604998   29751 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 17:18:39.605058   29751 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 17:18:39.605127   29751 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 17:18:39.605136   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> /etc/ssl/certs/183932.pem
	I0729 17:18:39.605218   29751 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 17:18:39.614337   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 17:18:39.639317   29751 start.go:296] duration metric: took 127.361162ms for postStartSetup
	I0729 17:18:39.639378   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetConfigRaw
	I0729 17:18:39.640029   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetIP
	I0729 17:18:39.642790   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.643146   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:39.643181   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.643470   29751 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/config.json ...
	I0729 17:18:39.643786   29751 start.go:128] duration metric: took 25.745185719s to createHost
	I0729 17:18:39.643812   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:18:39.646065   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.646471   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:39.646490   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.646764   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:18:39.646928   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:39.647019   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:39.647184   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:18:39.647361   29751 main.go:141] libmachine: Using SSH client type: native
	I0729 17:18:39.647546   29751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0729 17:18:39.647560   29751 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 17:18:39.755200   29751 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722273519.732548551
	
	I0729 17:18:39.755229   29751 fix.go:216] guest clock: 1722273519.732548551
	I0729 17:18:39.755235   29751 fix.go:229] Guest: 2024-07-29 17:18:39.732548551 +0000 UTC Remote: 2024-07-29 17:18:39.643800136 +0000 UTC m=+160.000060021 (delta=88.748415ms)
	I0729 17:18:39.755253   29751 fix.go:200] guest clock delta is within tolerance: 88.748415ms
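The fix.go lines above compare the guest's clock (what is effectively the output of `date +%s.%N`, logged here with %!s(MISSING) placeholders) against the host-side timestamp and only accept the machine when the delta stays inside a tolerance. A minimal, standalone Go sketch of that comparison follows; the helper name and the one-second tolerance are assumptions for illustration, not minikube's actual implementation.

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns "1722273519.732548551" (the guest's
    // seconds.nanoseconds timestamp) into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		// Right-pad the fractional part to 9 digits so it reads as nanoseconds.
    		frac := (parts[1] + "000000000")[:9]
    		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec).UTC(), nil
    }

    func main() {
    	guest, err := parseGuestClock("1722273519.732548551") // value taken from the log above
    	if err != nil {
    		panic(err)
    	}
    	remote := time.Now().UTC()
    	delta := guest.Sub(remote)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 1 * time.Second // assumed tolerance, not minikube's actual setting
    	if delta <= tolerance {
    		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance; the clock would need syncing\n", delta)
    	}
    }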
	I0729 17:18:39.755258   29751 start.go:83] releasing machines lock for "ha-900414-m03", held for 25.856762836s
	I0729 17:18:39.755277   29751 main.go:141] libmachine: (ha-900414-m03) Calling .DriverName
	I0729 17:18:39.755513   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetIP
	I0729 17:18:39.758889   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.759556   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:39.759585   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.761549   29751 out.go:177] * Found network options:
	I0729 17:18:39.762906   29751 out.go:177]   - NO_PROXY=192.168.39.114,192.168.39.111
	W0729 17:18:39.764111   29751 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 17:18:39.764131   29751 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 17:18:39.764158   29751 main.go:141] libmachine: (ha-900414-m03) Calling .DriverName
	I0729 17:18:39.764706   29751 main.go:141] libmachine: (ha-900414-m03) Calling .DriverName
	I0729 17:18:39.764888   29751 main.go:141] libmachine: (ha-900414-m03) Calling .DriverName
	I0729 17:18:39.764989   29751 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 17:18:39.765028   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	W0729 17:18:39.765084   29751 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 17:18:39.765101   29751 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 17:18:39.765157   29751 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 17:18:39.765171   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:18:39.767982   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.768326   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.768368   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:39.768394   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.768541   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:18:39.768714   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:39.768809   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:39.768827   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:39.768901   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:18:39.768974   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:18:39.769048   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/id_rsa Username:docker}
	I0729 17:18:39.769119   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:18:39.769248   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:18:39.769396   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/id_rsa Username:docker}
	I0729 17:18:40.008382   29751 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 17:18:40.015691   29751 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 17:18:40.015761   29751 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 17:18:40.032671   29751 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 17:18:40.032692   29751 start.go:495] detecting cgroup driver to use...
	I0729 17:18:40.032762   29751 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 17:18:40.050414   29751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 17:18:40.066938   29751 docker.go:217] disabling cri-docker service (if available) ...
	I0729 17:18:40.066991   29751 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 17:18:40.081494   29751 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 17:18:40.095961   29751 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 17:18:40.222640   29751 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 17:18:40.360965   29751 docker.go:233] disabling docker service ...
	I0729 17:18:40.361045   29751 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 17:18:40.375633   29751 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 17:18:40.388273   29751 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 17:18:40.532840   29751 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 17:18:40.676072   29751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 17:18:40.689785   29751 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 17:18:40.709089   29751 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 17:18:40.709150   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:18:40.719494   29751 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 17:18:40.719560   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:18:40.730041   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:18:40.740211   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:18:40.750185   29751 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 17:18:40.760826   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:18:40.771677   29751 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:18:40.788399   29751 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
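The sed calls above edit /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch cgroup_manager to cgroupfs, move conmon into the pod cgroup, and open net.ipv4.ip_unprivileged_port_start=0 via default_sysctls. A rough Go sketch of the first three edits against a local copy of the file (hypothetical helper name, not minikube's implementation; the default_sysctls step is omitted for brevity):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // rewriteCrioConf applies, to a local 02-crio.conf, edits equivalent to the
    // sed commands run over SSH above.
    func rewriteCrioConf(path string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	conf := string(data)

    	// Pin the pause image.
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)

    	// Force the cgroupfs cgroup manager.
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

    	// Drop any existing conmon_cgroup line, then re-add it right after
    	// cgroup_manager, mirroring the delete-then-append pair of sed calls.
    	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n?`).ReplaceAllString(conf, "")
    	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
    		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

    	return os.WriteFile(path, []byte(conf), 0o644)
    }

    func main() {
    	if err := rewriteCrioConf("02-crio.conf"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }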
	I0729 17:18:40.798349   29751 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 17:18:40.807516   29751 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 17:18:40.807575   29751 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 17:18:40.821127   29751 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 17:18:40.830609   29751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:18:40.946720   29751 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 17:18:41.086008   29751 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 17:18:41.086073   29751 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 17:18:41.090668   29751 start.go:563] Will wait 60s for crictl version
	I0729 17:18:41.090720   29751 ssh_runner.go:195] Run: which crictl
	I0729 17:18:41.094290   29751 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 17:18:41.141366   29751 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 17:18:41.141452   29751 ssh_runner.go:195] Run: crio --version
	I0729 17:18:41.169254   29751 ssh_runner.go:195] Run: crio --version
	I0729 17:18:41.198516   29751 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 17:18:41.199921   29751 out.go:177]   - env NO_PROXY=192.168.39.114
	I0729 17:18:41.201078   29751 out.go:177]   - env NO_PROXY=192.168.39.114,192.168.39.111
	I0729 17:18:41.202122   29751 main.go:141] libmachine: (ha-900414-m03) Calling .GetIP
	I0729 17:18:41.204737   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:41.205120   29751 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:18:41.205146   29751 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:18:41.205306   29751 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 17:18:41.209344   29751 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 17:18:41.221916   29751 mustload.go:65] Loading cluster: ha-900414
	I0729 17:18:41.222106   29751 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:18:41.222385   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:18:41.222430   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:18:41.237599   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32921
	I0729 17:18:41.238105   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:18:41.238665   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:18:41.238682   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:18:41.238992   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:18:41.239156   29751 main.go:141] libmachine: (ha-900414) Calling .GetState
	I0729 17:18:41.240467   29751 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:18:41.240786   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:18:41.240824   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:18:41.254764   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42031
	I0729 17:18:41.255100   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:18:41.255454   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:18:41.255468   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:18:41.255732   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:18:41.255891   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:18:41.256046   29751 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414 for IP: 192.168.39.6
	I0729 17:18:41.256059   29751 certs.go:194] generating shared ca certs ...
	I0729 17:18:41.256075   29751 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:18:41.256213   29751 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 17:18:41.256263   29751 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 17:18:41.256279   29751 certs.go:256] generating profile certs ...
	I0729 17:18:41.256375   29751 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.key
	I0729 17:18:41.256425   29751 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.826d1828
	I0729 17:18:41.256446   29751 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.826d1828 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114 192.168.39.111 192.168.39.6 192.168.39.254]
	I0729 17:18:41.489384   29751 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.826d1828 ...
	I0729 17:18:41.489413   29751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.826d1828: {Name:mk943bd45e2a4e4e4c4affd69e2cd693563da4e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:18:41.489592   29751 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.826d1828 ...
	I0729 17:18:41.489611   29751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.826d1828: {Name:mk5e48b8f3e65218b7961a6917dda810634f838b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:18:41.489706   29751 certs.go:381] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.826d1828 -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt
	I0729 17:18:41.489844   29751 certs.go:385] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.826d1828 -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key
	I0729 17:18:41.489962   29751 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key
	I0729 17:18:41.489975   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 17:18:41.489987   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 17:18:41.490000   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 17:18:41.490012   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 17:18:41.490031   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 17:18:41.490053   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 17:18:41.490098   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 17:18:41.490118   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
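The apiserver profile certificate regenerated above carries IP SANs covering the in-cluster service IP, loopback, all three control-plane node IPs, and the kube-vip VIP 192.168.39.254, so the API server presents a valid certificate at any of those endpoints. A minimal crypto/x509 sketch of issuing a certificate with that SAN list (self-signed here for brevity; the real certificate is signed by the cluster CA, and the common name is illustrative):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// IP SANs matching the list logged above: service IP, loopback,
    	// the three control-plane node IPs, and the kube-vip VIP.
    	sans := []net.IP{
    		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    		net.ParseIP("192.168.39.114"), net.ParseIP("192.168.39.111"),
    		net.ParseIP("192.168.39.6"), net.ParseIP("192.168.39.254"),
    	}

    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube-apiserver"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  sans,
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	out, err := os.Create("apiserver.crt")
    	if err != nil {
    		panic(err)
    	}
    	defer out.Close()
    	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
    		panic(err)
    	}
    }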
	I0729 17:18:41.490179   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 17:18:41.490206   29751 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 17:18:41.490215   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 17:18:41.490235   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 17:18:41.490259   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 17:18:41.490282   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 17:18:41.490318   29751 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 17:18:41.490342   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:18:41.490357   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem -> /usr/share/ca-certificates/18393.pem
	I0729 17:18:41.490393   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> /usr/share/ca-certificates/183932.pem
	I0729 17:18:41.490437   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:18:41.493337   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:18:41.493728   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:18:41.493768   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:18:41.493929   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:18:41.494112   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:18:41.494260   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:18:41.494391   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:18:41.574702   29751 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0729 17:18:41.580327   29751 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 17:18:41.594027   29751 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0729 17:18:41.599399   29751 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0729 17:18:41.611102   29751 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 17:18:41.615269   29751 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 17:18:41.627335   29751 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0729 17:18:41.631697   29751 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0729 17:18:41.643941   29751 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0729 17:18:41.648561   29751 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 17:18:41.659576   29751 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0729 17:18:41.664123   29751 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0729 17:18:41.675373   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 17:18:41.701044   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 17:18:41.725127   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 17:18:41.749289   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 17:18:41.780486   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0729 17:18:41.804847   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 17:18:41.830878   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 17:18:41.856997   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 17:18:41.885612   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 17:18:41.911375   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 17:18:41.935508   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 17:18:41.962258   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 17:18:41.978918   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0729 17:18:41.995475   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 17:18:42.015357   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0729 17:18:42.033472   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 17:18:42.051159   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0729 17:18:42.068362   29751 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 17:18:42.086927   29751 ssh_runner.go:195] Run: openssl version
	I0729 17:18:42.092826   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 17:18:42.103736   29751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 17:18:42.108386   29751 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 17:18:42.108437   29751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 17:18:42.114415   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 17:18:42.125122   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 17:18:42.135738   29751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:18:42.140371   29751 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:18:42.140416   29751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:18:42.146023   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 17:18:42.157101   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 17:18:42.168377   29751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 17:18:42.173027   29751 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 17:18:42.173078   29751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 17:18:42.178674   29751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
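Each openssl/ln pair above computes a certificate's subject hash and links <hash>.0 in /etc/ssl/certs back to the PEM file, which is how OpenSSL-style trust stores locate CA certificates. A small Go sketch of the same step (hypothetical helper; assumes the openssl binary is on PATH and the process can write to the trust directory):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCert reproduces the openssl-hash + ln -fs step above: compute the
    // certificate's subject hash and symlink <hash>.0 in the trust directory
    // back to the certificate file.
    func linkCert(certPath, trustDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(trustDir, hash+".0")
    	_ = os.Remove(link) // mirror ln -fs: replace any existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }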
	I0729 17:18:42.189395   29751 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 17:18:42.193772   29751 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 17:18:42.193828   29751 kubeadm.go:934] updating node {m03 192.168.39.6 8443 v1.30.3 crio true true} ...
	I0729 17:18:42.193936   29751 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-900414-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-900414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 17:18:42.193970   29751 kube-vip.go:115] generating kube-vip config ...
	I0729 17:18:42.194012   29751 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 17:18:42.212288   29751 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 17:18:42.212355   29751 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0729 17:18:42.212405   29751 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 17:18:42.223812   29751 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 17:18:42.223874   29751 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 17:18:42.233769   29751 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 17:18:42.233784   29751 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0729 17:18:42.233793   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 17:18:42.233813   29751 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0729 17:18:42.233825   29751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:18:42.233828   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 17:18:42.233880   29751 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 17:18:42.233899   29751 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 17:18:42.252670   29751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 17:18:42.252700   29751 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 17:18:42.252728   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 17:18:42.252765   29751 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 17:18:42.252820   29751 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 17:18:42.252845   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 17:18:42.278610   29751 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 17:18:42.278655   29751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
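The stat-then-scp pattern above transfers kubeadm, kubectl and kubelet only when they are missing under /var/lib/minikube/binaries/v1.30.3, making the step idempotent on restarts. A simplified local Go sketch of that check-and-copy (plain file copy standing in for scp; the paths and helper name are illustrative):

    package main

    import (
    	"fmt"
    	"io"
    	"os"
    	"path/filepath"
    )

    // ensureBinary copies a cached binary into the destination directory only
    // when it is not already present there.
    func ensureBinary(cacheDir, destDir, name string) error {
    	dst := filepath.Join(destDir, name)
    	if _, err := os.Stat(dst); err == nil {
    		return nil // already transferred, nothing to do
    	}
    	src, err := os.Open(filepath.Join(cacheDir, name))
    	if err != nil {
    		return err
    	}
    	defer src.Close()
    	out, err := os.OpenFile(dst, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0o755)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	_, err = io.Copy(out, src)
    	return err
    }

    func main() {
    	for _, bin := range []string{"kubeadm", "kubectl", "kubelet"} {
    		if err := ensureBinary("cache/linux/amd64/v1.30.3", "/var/lib/minikube/binaries/v1.30.3", bin); err != nil {
    			fmt.Fprintln(os.Stderr, bin, err)
    		}
    	}
    }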
	I0729 17:18:43.143466   29751 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 17:18:43.153090   29751 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0729 17:18:43.169229   29751 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 17:18:43.185296   29751 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 17:18:43.201501   29751 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 17:18:43.205346   29751 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 17:18:43.217546   29751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:18:43.348207   29751 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:18:43.365606   29751 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:18:43.366044   29751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:18:43.366091   29751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:18:43.383436   29751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37583
	I0729 17:18:43.383836   29751 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:18:43.384341   29751 main.go:141] libmachine: Using API Version  1
	I0729 17:18:43.384365   29751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:18:43.384732   29751 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:18:43.384911   29751 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:18:43.385078   29751 start.go:317] joinCluster: &{Name:ha-900414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-900414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:18:43.385216   29751 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 17:18:43.385236   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:18:43.387931   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:18:43.388317   29751 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:18:43.388347   29751 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:18:43.388514   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:18:43.388672   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:18:43.388844   29751 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:18:43.388972   29751 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:18:43.548828   29751 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:18:43.548874   29751 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cgee31.5r7eghabux47j74p --discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-900414-m03 --control-plane --apiserver-advertise-address=192.168.39.6 --apiserver-bind-port=8443"
	I0729 17:19:06.594553   29751 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cgee31.5r7eghabux47j74p --discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-900414-m03 --control-plane --apiserver-advertise-address=192.168.39.6 --apiserver-bind-port=8443": (23.045646537s)
	I0729 17:19:06.594588   29751 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 17:19:07.239944   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-900414-m03 minikube.k8s.io/updated_at=2024_07_29T17_19_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899 minikube.k8s.io/name=ha-900414 minikube.k8s.io/primary=false
	I0729 17:19:07.352742   29751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-900414-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 17:19:07.472219   29751 start.go:319] duration metric: took 24.087139049s to joinCluster
	I0729 17:19:07.472317   29751 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:19:07.472621   29751 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:19:07.473775   29751 out.go:177] * Verifying Kubernetes components...
	I0729 17:19:07.475144   29751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:19:07.793820   29751 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:19:07.849414   29751 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 17:19:07.849677   29751 kapi.go:59] client config for ha-900414: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.crt", KeyFile:"/home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.key", CAFile:"/home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 17:19:07.849744   29751 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.114:8443
	I0729 17:19:07.849994   29751 node_ready.go:35] waiting up to 6m0s for node "ha-900414-m03" to be "Ready" ...
	I0729 17:19:07.850113   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:07.850123   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:07.850135   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:07.850141   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:07.853663   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
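The round_trippers requests here poll GET /api/v1/nodes/ha-900414-m03 roughly every 500ms, for up to 6 minutes, until the node's Ready condition turns True. An equivalent wait loop with client-go looks roughly like the following standalone sketch (not minikube's node_ready.go; the kubeconfig path is the one logged above and is only an example):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19345-11206/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Poll the node object every 500ms for up to 6 minutes, succeeding once
    	// the NodeReady condition reports True.
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := client.CoreV1().Nodes().Get(ctx, "ha-900414-m03", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat API errors as transient and keep polling
    			}
    			for _, cond := range node.Status.Conditions {
    				if cond.Type == corev1.NodeReady {
    					return cond.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(`node "ha-900414-m03" has status "Ready":"True"`)
    }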
	I0729 17:19:08.350487   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:08.350512   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:08.350524   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:08.350529   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:08.354556   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:19:08.850616   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:08.850636   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:08.850645   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:08.850648   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:08.854661   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:09.350522   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:09.350604   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:09.350618   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:09.350623   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:09.356116   29751 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 17:19:09.850579   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:09.850599   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:09.850607   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:09.850610   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:09.854987   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:19:09.855646   29751 node_ready.go:53] node "ha-900414-m03" has status "Ready":"False"
	I0729 17:19:10.350985   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:10.351007   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:10.351019   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:10.351025   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:10.354678   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:10.850514   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:10.850533   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:10.850541   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:10.850545   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:10.854110   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:11.351209   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:11.351249   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:11.351266   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:11.351271   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:11.354932   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:11.850823   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:11.850844   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:11.850852   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:11.850858   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:11.854984   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:19:12.350958   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:12.350989   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:12.351000   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:12.351007   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:12.354415   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:12.355130   29751 node_ready.go:53] node "ha-900414-m03" has status "Ready":"False"
	I0729 17:19:12.850262   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:12.850281   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:12.850289   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:12.850294   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:12.853509   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:13.350496   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:13.350516   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:13.350524   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:13.350529   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:13.353935   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:13.850728   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:13.850751   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:13.850759   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:13.850764   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:13.854749   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:14.350461   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:14.350480   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:14.350490   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:14.350494   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:14.354803   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:19:14.355421   29751 node_ready.go:53] node "ha-900414-m03" has status "Ready":"False"
	I0729 17:19:14.850187   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:14.850221   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:14.850231   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:14.850237   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:14.853494   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:15.350506   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:15.350526   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:15.350534   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:15.350541   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:15.353679   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:15.850900   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:15.850925   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:15.850935   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:15.850943   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:15.853670   29751 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:19:16.351143   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:16.351165   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:16.351176   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:16.351181   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:16.354410   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:16.850396   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:16.850416   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:16.850425   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:16.850428   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:16.853683   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:16.854325   29751 node_ready.go:53] node "ha-900414-m03" has status "Ready":"False"
	I0729 17:19:17.350639   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:17.350658   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:17.350667   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:17.350672   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:17.354867   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:19:17.851013   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:17.851037   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:17.851049   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:17.851053   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:17.854718   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:18.350501   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:18.350520   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:18.350536   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:18.350541   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:18.353940   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:18.851009   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:18.851033   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:18.851045   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:18.851050   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:18.854509   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:18.855161   29751 node_ready.go:53] node "ha-900414-m03" has status "Ready":"False"
	I0729 17:19:19.350465   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:19.350483   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:19.350491   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:19.350495   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:19.353618   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:19.850345   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:19.850381   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:19.850393   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:19.850400   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:19.853469   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:20.351156   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:20.351182   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:20.351192   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:20.351199   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:20.355240   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:19:20.850380   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:20.850403   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:20.850411   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:20.850415   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:20.854135   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:21.351083   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:21.351108   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:21.351119   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:21.351124   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:21.355081   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:21.355742   29751 node_ready.go:53] node "ha-900414-m03" has status "Ready":"False"
	I0729 17:19:21.851203   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:21.851224   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:21.851231   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:21.851236   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:21.854194   29751 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:19:21.854912   29751 node_ready.go:49] node "ha-900414-m03" has status "Ready":"True"
	I0729 17:19:21.854974   29751 node_ready.go:38] duration metric: took 14.004961019s for node "ha-900414-m03" to be "Ready" ...
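The fourteen-second loop above is minikube polling GET /api/v1/nodes/ha-900414-m03 roughly every 500ms until the node reports a Ready condition of True. A minimal client-go sketch of the same kind of check follows; the kubeconfig path, helper name, timeout and node name are illustrative, not minikube's actual implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its Ready condition is True
// or the timeout elapses, mirroring the ~500ms cadence seen in the log.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("node %q not Ready after %s", name, timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "ha-900414-m03", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}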
	I0729 17:19:21.854990   29751 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 17:19:21.855079   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I0729 17:19:21.855091   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:21.855102   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:21.855119   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:21.863482   29751 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0729 17:19:21.870015   29751 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-48j6w" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:21.870082   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-48j6w
	I0729 17:19:21.870090   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:21.870097   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:21.870101   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:21.873025   29751 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:19:21.874008   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:19:21.874024   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:21.874030   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:21.874035   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:21.877376   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:21.878242   29751 pod_ready.go:92] pod "coredns-7db6d8ff4d-48j6w" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:21.878257   29751 pod_ready.go:81] duration metric: took 8.220998ms for pod "coredns-7db6d8ff4d-48j6w" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:21.878264   29751 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9r87x" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:21.878306   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9r87x
	I0729 17:19:21.878313   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:21.878320   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:21.878324   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:21.881497   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:21.882699   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:19:21.882712   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:21.882718   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:21.882721   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:21.885515   29751 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:19:21.886266   29751 pod_ready.go:92] pod "coredns-7db6d8ff4d-9r87x" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:21.886281   29751 pod_ready.go:81] duration metric: took 8.011311ms for pod "coredns-7db6d8ff4d-9r87x" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:21.886288   29751 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:21.886328   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-900414
	I0729 17:19:21.886335   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:21.886342   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:21.886347   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:21.888538   29751 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:19:21.888993   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:19:21.889005   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:21.889012   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:21.889016   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:21.891453   29751 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:19:21.892242   29751 pod_ready.go:92] pod "etcd-ha-900414" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:21.892263   29751 pod_ready.go:81] duration metric: took 5.969237ms for pod "etcd-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:21.892285   29751 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:21.892339   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-900414-m02
	I0729 17:19:21.892348   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:21.892355   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:21.892359   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:21.895297   29751 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:19:21.895969   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:19:21.895985   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:21.895995   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:21.896000   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:21.898796   29751 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:19:21.899611   29751 pod_ready.go:92] pod "etcd-ha-900414-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:21.899625   29751 pod_ready.go:81] duration metric: took 7.333134ms for pod "etcd-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:21.899632   29751 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-900414-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:22.052019   29751 request.go:629] Waited for 152.334115ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-900414-m03
	I0729 17:19:22.052078   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-900414-m03
	I0729 17:19:22.052094   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:22.052104   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:22.052108   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:22.055335   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:22.251244   29751 request.go:629] Waited for 195.251841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:22.251313   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:22.251324   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:22.251335   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:22.251345   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:22.255297   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:22.256267   29751 pod_ready.go:92] pod "etcd-ha-900414-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:22.256289   29751 pod_ready.go:81] duration metric: took 356.650571ms for pod "etcd-ha-900414-m03" in "kube-system" namespace to be "Ready" ...
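The "Waited for ... due to client-side throttling, not priority and fairness" lines that start appearing here come from client-go's token-bucket rate limiter: once the burst of readiness GETs exceeds the default limits (QPS 5, Burst 10) each request is delayed on the client, independent of server-side API Priority and Fairness. A small sketch of raising those limits on a rest.Config; the kubeconfig path and the chosen values are illustrative.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Defaults are QPS=5 and Burst=10; raising them avoids the client-side
	// throttling delays logged above when many GETs are issued back to back.
	cfg.QPS = 50
	cfg.Burst = 100
	cs := kubernetes.NewForConfigOrDie(cfg)
	_ = cs
	fmt.Println("client configured with QPS=50, Burst=100")
}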
	I0729 17:19:22.256312   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:22.451250   29751 request.go:629] Waited for 194.873541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-900414
	I0729 17:19:22.451372   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-900414
	I0729 17:19:22.451388   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:22.451398   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:22.451403   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:22.455235   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:22.652048   29751 request.go:629] Waited for 196.262816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:19:22.652095   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:19:22.652100   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:22.652114   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:22.652120   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:22.655573   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:22.656081   29751 pod_ready.go:92] pod "kube-apiserver-ha-900414" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:22.656100   29751 pod_ready.go:81] duration metric: took 399.776412ms for pod "kube-apiserver-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:22.656112   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:22.852262   29751 request.go:629] Waited for 196.068275ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-900414-m02
	I0729 17:19:22.852321   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-900414-m02
	I0729 17:19:22.852328   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:22.852335   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:22.852341   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:22.855970   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:23.052127   29751 request.go:629] Waited for 195.362656ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:19:23.052208   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:19:23.052216   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:23.052227   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:23.052235   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:23.055712   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:23.056126   29751 pod_ready.go:92] pod "kube-apiserver-ha-900414-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:23.056140   29751 pod_ready.go:81] duration metric: took 400.012328ms for pod "kube-apiserver-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:23.056149   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-900414-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:23.252272   29751 request.go:629] Waited for 196.048736ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-900414-m03
	I0729 17:19:23.252349   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-900414-m03
	I0729 17:19:23.252355   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:23.252362   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:23.252367   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:23.256080   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:23.451956   29751 request.go:629] Waited for 195.269252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:23.452073   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:23.452085   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:23.452096   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:23.452108   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:23.457021   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:19:23.457679   29751 pod_ready.go:92] pod "kube-apiserver-ha-900414-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:23.457696   29751 pod_ready.go:81] duration metric: took 401.53635ms for pod "kube-apiserver-ha-900414-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:23.457715   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:23.651395   29751 request.go:629] Waited for 193.614796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-900414
	I0729 17:19:23.651468   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-900414
	I0729 17:19:23.651475   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:23.651484   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:23.651490   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:23.655606   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:19:23.851523   29751 request.go:629] Waited for 195.363742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:19:23.851571   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:19:23.851576   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:23.851585   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:23.851588   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:23.855252   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:23.855761   29751 pod_ready.go:92] pod "kube-controller-manager-ha-900414" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:23.855785   29751 pod_ready.go:81] duration metric: took 398.06379ms for pod "kube-controller-manager-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:23.855795   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:24.051238   29751 request.go:629] Waited for 195.386711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-900414-m02
	I0729 17:19:24.051305   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-900414-m02
	I0729 17:19:24.051311   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:24.051319   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:24.051324   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:24.054653   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:24.251795   29751 request.go:629] Waited for 196.363963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:19:24.251840   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:19:24.251847   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:24.251854   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:24.251860   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:24.255941   29751 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:19:24.256531   29751 pod_ready.go:92] pod "kube-controller-manager-ha-900414-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:24.256552   29751 pod_ready.go:81] duration metric: took 400.750428ms for pod "kube-controller-manager-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:24.256562   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-900414-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:24.451894   29751 request.go:629] Waited for 195.26591ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-900414-m03
	I0729 17:19:24.451968   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-900414-m03
	I0729 17:19:24.451979   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:24.451993   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:24.452004   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:24.455670   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:24.651694   29751 request.go:629] Waited for 195.361663ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:24.651747   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:24.651754   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:24.651764   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:24.651773   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:24.654780   29751 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:19:24.655240   29751 pod_ready.go:92] pod "kube-controller-manager-ha-900414-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:24.655304   29751 pod_ready.go:81] duration metric: took 398.730533ms for pod "kube-controller-manager-ha-900414-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:24.655323   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bgq99" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:24.851560   29751 request.go:629] Waited for 196.160756ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bgq99
	I0729 17:19:24.851637   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bgq99
	I0729 17:19:24.851645   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:24.851654   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:24.851662   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:24.855588   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:25.051529   29751 request.go:629] Waited for 195.171844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:19:25.051604   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:19:25.051616   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:25.051627   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:25.051641   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:25.054958   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:25.055413   29751 pod_ready.go:92] pod "kube-proxy-bgq99" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:25.055431   29751 pod_ready.go:81] duration metric: took 400.102063ms for pod "kube-proxy-bgq99" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:25.055442   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tng4t" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:25.251534   29751 request.go:629] Waited for 196.01631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tng4t
	I0729 17:19:25.251602   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tng4t
	I0729 17:19:25.251607   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:25.251615   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:25.251619   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:25.254889   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:25.452044   29751 request.go:629] Waited for 196.352608ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:19:25.452102   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:19:25.452158   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:25.452172   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:25.452182   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:25.455565   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:25.456093   29751 pod_ready.go:92] pod "kube-proxy-tng4t" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:25.456116   29751 pod_ready.go:81] duration metric: took 400.661421ms for pod "kube-proxy-tng4t" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:25.456125   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wnfsb" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:25.652157   29751 request.go:629] Waited for 195.96246ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wnfsb
	I0729 17:19:25.652245   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wnfsb
	I0729 17:19:25.652256   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:25.652267   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:25.652276   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:25.655725   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:25.851517   29751 request.go:629] Waited for 195.149449ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:25.851595   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:25.851606   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:25.851618   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:25.851628   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:25.854422   29751 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:19:25.855240   29751 pod_ready.go:92] pod "kube-proxy-wnfsb" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:25.855262   29751 pod_ready.go:81] duration metric: took 399.130576ms for pod "kube-proxy-wnfsb" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:25.855275   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:26.051220   29751 request.go:629] Waited for 195.864245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-900414
	I0729 17:19:26.051288   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-900414
	I0729 17:19:26.051293   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:26.051302   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:26.051313   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:26.054646   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:26.251646   29751 request.go:629] Waited for 196.397199ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:19:26.251718   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414
	I0729 17:19:26.251723   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:26.251732   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:26.251739   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:26.255079   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:26.255799   29751 pod_ready.go:92] pod "kube-scheduler-ha-900414" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:26.255826   29751 pod_ready.go:81] duration metric: took 400.542457ms for pod "kube-scheduler-ha-900414" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:26.255841   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:26.452159   29751 request.go:629] Waited for 196.243797ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-900414-m02
	I0729 17:19:26.452214   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-900414-m02
	I0729 17:19:26.452219   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:26.452227   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:26.452232   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:26.456010   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:26.651980   29751 request.go:629] Waited for 195.349194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:19:26.652042   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m02
	I0729 17:19:26.652050   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:26.652058   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:26.652061   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:26.655151   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:26.655867   29751 pod_ready.go:92] pod "kube-scheduler-ha-900414-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:26.655886   29751 pod_ready.go:81] duration metric: took 400.036515ms for pod "kube-scheduler-ha-900414-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:26.655895   29751 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-900414-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:26.851900   29751 request.go:629] Waited for 195.949978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-900414-m03
	I0729 17:19:26.851987   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-900414-m03
	I0729 17:19:26.851998   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:26.852010   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:26.852019   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:26.855542   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:27.051535   29751 request.go:629] Waited for 195.337415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:27.051620   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-900414-m03
	I0729 17:19:27.051626   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:27.051636   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:27.051642   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:27.055487   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:27.056183   29751 pod_ready.go:92] pod "kube-scheduler-ha-900414-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 17:19:27.056199   29751 pod_ready.go:81] duration metric: took 400.299217ms for pod "kube-scheduler-ha-900414-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:19:27.056210   29751 pod_ready.go:38] duration metric: took 5.201207309s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
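Each pod_ready step above fetches a named kube-system pod (and then its node) and treats the pod as ready when its Ready condition is True. A condensed client-go sketch of that check; the label selectors and namespace mirror the log, while the helper name and kubeconfig path are illustrative.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True.
func podIsReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// One selector per system-critical component, as listed in the log above.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s ready=%v\n", p.Name, podIsReady(&p))
		}
	}
}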
	I0729 17:19:27.056225   29751 api_server.go:52] waiting for apiserver process to appear ...
	I0729 17:19:27.056269   29751 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:19:27.073728   29751 api_server.go:72] duration metric: took 19.601372284s to wait for apiserver process to appear ...
	I0729 17:19:27.073746   29751 api_server.go:88] waiting for apiserver healthz status ...
	I0729 17:19:27.073763   29751 api_server.go:253] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I0729 17:19:27.077897   29751 api_server.go:279] https://192.168.39.114:8443/healthz returned 200:
	ok
	I0729 17:19:27.077950   29751 round_trippers.go:463] GET https://192.168.39.114:8443/version
	I0729 17:19:27.077957   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:27.077966   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:27.077972   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:27.078823   29751 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 17:19:27.078882   29751 api_server.go:141] control plane version: v1.30.3
	I0729 17:19:27.078899   29751 api_server.go:131] duration metric: took 5.145715ms to wait for apiserver health ...
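The healthz and /version probes above can be reproduced through the clientset's REST client; a minimal sketch, with the server address taken from the kubeconfig rather than hard-coded to 192.168.39.114:8443.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GET /healthz through the authenticated REST client; a body of "ok" means healthy.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version, the same call that reported v1.30.3 above.
	ver, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s\n", ver.GitVersion)
}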
	I0729 17:19:27.078908   29751 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 17:19:27.251235   29751 request.go:629] Waited for 172.262934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I0729 17:19:27.251282   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I0729 17:19:27.251287   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:27.251293   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:27.251297   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:27.258319   29751 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 17:19:27.264831   29751 system_pods.go:59] 24 kube-system pods found
	I0729 17:19:27.264865   29751 system_pods.go:61] "coredns-7db6d8ff4d-48j6w" [306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d] Running
	I0729 17:19:27.264874   29751 system_pods.go:61] "coredns-7db6d8ff4d-9r87x" [fcc4709f-f07b-4694-a352-aedd9c67bbb2] Running
	I0729 17:19:27.264880   29751 system_pods.go:61] "etcd-ha-900414" [96243a16-1b51-4136-bc25-f3a0da2f7500] Running
	I0729 17:19:27.264886   29751 system_pods.go:61] "etcd-ha-900414-m02" [29c61208-cebd-4d6b-addf-426efcc78899] Running
	I0729 17:19:27.264891   29751 system_pods.go:61] "etcd-ha-900414-m03" [67d8b9ed-d401-4de2-9ef8-c8295c488e29] Running
	I0729 17:19:27.264898   29751 system_pods.go:61] "kindnet-6vzd2" [396c742c-f9b6-4184-84db-7407ba419a86] Running
	I0729 17:19:27.264910   29751 system_pods.go:61] "kindnet-kdzhk" [d86b52ee-7d4c-4530-afa1-88cf8ad77379] Running
	I0729 17:19:27.264915   29751 system_pods.go:61] "kindnet-z9cvz" [c2177daa-4efb-478c-845f-f30e77e91684] Running
	I0729 17:19:27.264919   29751 system_pods.go:61] "kube-apiserver-ha-900414" [2a4045e8-a900-4ebd-b36e-95083ab251c9] Running
	I0729 17:19:27.264924   29751 system_pods.go:61] "kube-apiserver-ha-900414-m02" [28c2e5cf-876b-4b77-b9c7-406642dc4df6] Running
	I0729 17:19:27.264930   29751 system_pods.go:61] "kube-apiserver-ha-900414-m03" [2d5328fb-f6d2-4efc-ab72-0395e6500f21] Running
	I0729 17:19:27.264934   29751 system_pods.go:61] "kube-controller-manager-ha-900414" [62bb9ded-db08-49a0-aea4-8806d0e8d294] Running
	I0729 17:19:27.264939   29751 system_pods.go:61] "kube-controller-manager-ha-900414-m02" [88418c96-4611-4276-91c6-ae9b67d4ae74] Running
	I0729 17:19:27.264943   29751 system_pods.go:61] "kube-controller-manager-ha-900414-m03" [f8b5466c-1783-4f30-b3d1-f5034f7f52af] Running
	I0729 17:19:27.264948   29751 system_pods.go:61] "kube-proxy-bgq99" [0258cc44-f6ff-4294-a621-61b172247e15] Running
	I0729 17:19:27.264952   29751 system_pods.go:61] "kube-proxy-tng4t" [2303269f-50d3-4a63-aa76-891f001e6f5d] Running
	I0729 17:19:27.264957   29751 system_pods.go:61] "kube-proxy-wnfsb" [0322d88f-c31b-4cc7-b073-2f97ab9e047a] Running
	I0729 17:19:27.264963   29751 system_pods.go:61] "kube-scheduler-ha-900414" [3d41b818-c8ad-4dbb-bc7b-73f578d33539] Running
	I0729 17:19:27.264971   29751 system_pods.go:61] "kube-scheduler-ha-900414-m02" [f9cc318d-be18-4858-9712-b92f11027b65] Running
	I0729 17:19:27.264977   29751 system_pods.go:61] "kube-scheduler-ha-900414-m03" [7787c02c-b8dc-435f-9e58-52108a528291] Running
	I0729 17:19:27.264984   29751 system_pods.go:61] "kube-vip-ha-900414" [bf3918b4-6cc5-499b-808e-b6c33138cae2] Running
	I0729 17:19:27.264989   29751 system_pods.go:61] "kube-vip-ha-900414-m02" [9fad8ffb-6d3c-44ba-9700-e0e4d70a5f71] Running
	I0729 17:19:27.264993   29751 system_pods.go:61] "kube-vip-ha-900414-m03" [78c34b31-b4d7-4311-9c22-32a2f8fdd948] Running
	I0729 17:19:27.265002   29751 system_pods.go:61] "storage-provisioner" [50fa96e8-1ee5-4e09-a734-802dbcd02bcc] Running
	I0729 17:19:27.265012   29751 system_pods.go:74] duration metric: took 186.095195ms to wait for pod list to return data ...
	I0729 17:19:27.265024   29751 default_sa.go:34] waiting for default service account to be created ...
	I0729 17:19:27.451349   29751 request.go:629] Waited for 186.261672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/default/serviceaccounts
	I0729 17:19:27.451413   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/default/serviceaccounts
	I0729 17:19:27.451419   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:27.451426   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:27.451438   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:27.454883   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:27.455036   29751 default_sa.go:45] found service account: "default"
	I0729 17:19:27.455055   29751 default_sa.go:55] duration metric: took 190.02141ms for default service account to be created ...
	I0729 17:19:27.455066   29751 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 17:19:27.651345   29751 request.go:629] Waited for 196.211598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I0729 17:19:27.651413   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I0729 17:19:27.651421   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:27.651444   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:27.651454   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:27.658183   29751 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 17:19:27.664639   29751 system_pods.go:86] 24 kube-system pods found
	I0729 17:19:27.664662   29751 system_pods.go:89] "coredns-7db6d8ff4d-48j6w" [306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d] Running
	I0729 17:19:27.664668   29751 system_pods.go:89] "coredns-7db6d8ff4d-9r87x" [fcc4709f-f07b-4694-a352-aedd9c67bbb2] Running
	I0729 17:19:27.664673   29751 system_pods.go:89] "etcd-ha-900414" [96243a16-1b51-4136-bc25-f3a0da2f7500] Running
	I0729 17:19:27.664677   29751 system_pods.go:89] "etcd-ha-900414-m02" [29c61208-cebd-4d6b-addf-426efcc78899] Running
	I0729 17:19:27.664681   29751 system_pods.go:89] "etcd-ha-900414-m03" [67d8b9ed-d401-4de2-9ef8-c8295c488e29] Running
	I0729 17:19:27.664684   29751 system_pods.go:89] "kindnet-6vzd2" [396c742c-f9b6-4184-84db-7407ba419a86] Running
	I0729 17:19:27.664688   29751 system_pods.go:89] "kindnet-kdzhk" [d86b52ee-7d4c-4530-afa1-88cf8ad77379] Running
	I0729 17:19:27.664695   29751 system_pods.go:89] "kindnet-z9cvz" [c2177daa-4efb-478c-845f-f30e77e91684] Running
	I0729 17:19:27.664700   29751 system_pods.go:89] "kube-apiserver-ha-900414" [2a4045e8-a900-4ebd-b36e-95083ab251c9] Running
	I0729 17:19:27.664706   29751 system_pods.go:89] "kube-apiserver-ha-900414-m02" [28c2e5cf-876b-4b77-b9c7-406642dc4df6] Running
	I0729 17:19:27.664710   29751 system_pods.go:89] "kube-apiserver-ha-900414-m03" [2d5328fb-f6d2-4efc-ab72-0395e6500f21] Running
	I0729 17:19:27.664716   29751 system_pods.go:89] "kube-controller-manager-ha-900414" [62bb9ded-db08-49a0-aea4-8806d0e8d294] Running
	I0729 17:19:27.664720   29751 system_pods.go:89] "kube-controller-manager-ha-900414-m02" [88418c96-4611-4276-91c6-ae9b67d4ae74] Running
	I0729 17:19:27.664727   29751 system_pods.go:89] "kube-controller-manager-ha-900414-m03" [f8b5466c-1783-4f30-b3d1-f5034f7f52af] Running
	I0729 17:19:27.664731   29751 system_pods.go:89] "kube-proxy-bgq99" [0258cc44-f6ff-4294-a621-61b172247e15] Running
	I0729 17:19:27.664736   29751 system_pods.go:89] "kube-proxy-tng4t" [2303269f-50d3-4a63-aa76-891f001e6f5d] Running
	I0729 17:19:27.664740   29751 system_pods.go:89] "kube-proxy-wnfsb" [0322d88f-c31b-4cc7-b073-2f97ab9e047a] Running
	I0729 17:19:27.664746   29751 system_pods.go:89] "kube-scheduler-ha-900414" [3d41b818-c8ad-4dbb-bc7b-73f578d33539] Running
	I0729 17:19:27.664750   29751 system_pods.go:89] "kube-scheduler-ha-900414-m02" [f9cc318d-be18-4858-9712-b92f11027b65] Running
	I0729 17:19:27.664755   29751 system_pods.go:89] "kube-scheduler-ha-900414-m03" [7787c02c-b8dc-435f-9e58-52108a528291] Running
	I0729 17:19:27.664759   29751 system_pods.go:89] "kube-vip-ha-900414" [bf3918b4-6cc5-499b-808e-b6c33138cae2] Running
	I0729 17:19:27.664765   29751 system_pods.go:89] "kube-vip-ha-900414-m02" [9fad8ffb-6d3c-44ba-9700-e0e4d70a5f71] Running
	I0729 17:19:27.664768   29751 system_pods.go:89] "kube-vip-ha-900414-m03" [78c34b31-b4d7-4311-9c22-32a2f8fdd948] Running
	I0729 17:19:27.664774   29751 system_pods.go:89] "storage-provisioner" [50fa96e8-1ee5-4e09-a734-802dbcd02bcc] Running
	I0729 17:19:27.664779   29751 system_pods.go:126] duration metric: took 209.703827ms to wait for k8s-apps to be running ...
	I0729 17:19:27.664788   29751 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 17:19:27.664831   29751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:19:27.681971   29751 system_svc.go:56] duration metric: took 17.176262ms WaitForService to wait for kubelet
	I0729 17:19:27.681994   29751 kubeadm.go:582] duration metric: took 20.209639319s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:19:27.682013   29751 node_conditions.go:102] verifying NodePressure condition ...
	I0729 17:19:27.851321   29751 request.go:629] Waited for 169.243338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes
	I0729 17:19:27.851403   29751 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes
	I0729 17:19:27.851412   29751 round_trippers.go:469] Request Headers:
	I0729 17:19:27.851423   29751 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:19:27.851429   29751 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:19:27.855049   29751 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:19:27.856452   29751 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 17:19:27.856489   29751 node_conditions.go:123] node cpu capacity is 2
	I0729 17:19:27.856510   29751 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 17:19:27.856516   29751 node_conditions.go:123] node cpu capacity is 2
	I0729 17:19:27.856523   29751 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 17:19:27.856527   29751 node_conditions.go:123] node cpu capacity is 2
	I0729 17:19:27.856532   29751 node_conditions.go:105] duration metric: took 174.51382ms to run NodePressure ...
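The NodePressure step above lists all nodes and reads their capacity (ephemeral storage, CPU) and pressure conditions. A small sketch of the same read using core/v1 field names; the output format and kubeconfig path are illustrative.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		for _, c := range n.Status.Conditions {
			// MemoryPressure / DiskPressure / PIDPressure should all be False on a healthy node.
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}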
	I0729 17:19:27.856545   29751 start.go:241] waiting for startup goroutines ...
	I0729 17:19:27.856563   29751 start.go:255] writing updated cluster config ...
	I0729 17:19:27.856897   29751 ssh_runner.go:195] Run: rm -f paused
	I0729 17:19:27.905803   29751 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 17:19:27.908595   29751 out.go:177] * Done! kubectl is now configured to use "ha-900414" cluster and "default" namespace by default
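The CRI-O section that follows is the runtime's debug log for gRPC calls arriving on its socket (RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListContainers), i.e. the same surface that kubelet and crictl use. A minimal CRI client sketch issuing those calls, assuming the default /var/run/crio/crio.sock endpoint; this is an illustration, not the collector that produced the log.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// RuntimeService/Version, as in the first debug line of the CRI-O log below.
	v, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", v.RuntimeName, v.RuntimeVersion, v.RuntimeApiVersion)

	// ImageService/ImageFsInfo reports image filesystem usage.
	if _, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{}); err != nil {
		panic(err)
	}

	// RuntimeService/ListContainers with an empty filter returns every container.
	lc, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range lc.Containers {
		fmt.Printf("%s %s %s\n", c.Id, c.Metadata.Name, c.State)
	}
}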
	
	
	==> CRI-O <==
	Jul 29 17:24:07 ha-900414 crio[684]: time="2024-07-29 17:24:07.451051696Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=35eb288a-5c47-4e5c-9474-49d0db07ac9a name=/runtime.v1.RuntimeService/Version
	Jul 29 17:24:07 ha-900414 crio[684]: time="2024-07-29 17:24:07.452738592Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=00a525ac-eba4-4c34-8122-2bc1daf622cb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:24:07 ha-900414 crio[684]: time="2024-07-29 17:24:07.453291563Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722273847453269108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=00a525ac-eba4-4c34-8122-2bc1daf622cb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:24:07 ha-900414 crio[684]: time="2024-07-29 17:24:07.453833372Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1913b5d7-ee2c-47b0-8bfd-943be0f553b7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:24:07 ha-900414 crio[684]: time="2024-07-29 17:24:07.453888912Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1913b5d7-ee2c-47b0-8bfd-943be0f553b7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:24:07 ha-900414 crio[684]: time="2024-07-29 17:24:07.454223539Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:174e5d31268c70a798e1fa1fe5d2845d98eaed228a11b55810b7ca4680256a8e,PodSandboxId:7d2a64a5bcccdbfe3d1db48fd0a6231c01ec2f72f5944f5aa82835bdbbf8641b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722273570293151902,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4fv4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6,},Annotations:map[string]string{io.kubernetes.container.hash: bbf9a5b4,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7ffaf9ef2fda3e8c5965888c0244dd20c8cdc30b4ed1c300c5f9de3a70a127,PodSandboxId:7a0bb58ad2b90a00cbfe5381a420068caf367d6d0a46d8bfa235680d9a9e383c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722273433298849093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9r87x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc4709f-f07b-4694-a352-aedd9c67bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: a73c1fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911569fe2373d5193385d0fdcc98071bacd23c7de020ed4e2ab3a15a3793c2d2,PodSandboxId:f47facc78da61a96cbc7f88d068ff1130bdf82703fa98c5e773eba93b8000852,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722273433248509110,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-48j6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 14f903c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b419192dc8add024f08c798a5f50d7c6bd2ee0ae8a2280771508aebc78e20217,PodSandboxId:e8c37c9dd56b7d7518c4f43dfb13701b15d68f51590bd8e492cf12524a18465e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722273433176341595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50fa96e8-1ee5-4e09-a734-802dbcd02bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 1a126ce7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b182b72bc50740d9cc2e0ed8b5c1d4b8f58c58594cc462fc796a75ccce7d38,PodSandboxId:30715fa1b9f024468de573f3e60b03860bdea65df505677b107723e5e7663d18,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722273421363177930,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z9cvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2177daa-4efb-478c-845f-f30e77e91684,},Annotations:map[string]string{io.kubernetes.container.hash: 7870c1dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ef29620e9c9670549fa7741de5956157c7a03728d417b46b44a7b1abbf2ce9,PodSandboxId:250f31f0996e1b89f155a50b796cf5c3e03e4e621f62973dc2ca1b4547440256,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172227341
7715021107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tng4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2303269f-50d3-4a63-aa76-891f001e6f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e285077a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426b48b0fdbff14ce36fc0396074186cbd51533c984e6fac5f3f963bce611059,PodSandboxId:0c097128258009ad8eecfd45367bd8c008515e1f5f2371df23a13194dbe2a20c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222733999
19223743,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08540c30100787432ed84b2f9dea411c,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7721018288f905547c9c059b6453a96e4c74f3573058e88425444162b255edf,PodSandboxId:e2a054b42822ad7d37df60a69fdb759eb309b8ee40e4c712e2f7ae6a2aaa0e6c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722273397639690260,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba63e9544003a32c61ae2cfa7bb117,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a27f5a54bd43275313e419dabaa643ad1764f5cd10953333df1eea8a9a4bf1b,PodSandboxId:46030b1ba43cfae01b3b4a26ba23e19c1dade394973241fddfe9126def4aa597,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722273397619697045,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c283b6b662036e086a0948631d339c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ec5252a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:270db6978c4e4bce98a1f424ce50f66507840c818ab639d9ef02e8f96bab41d6,PodSandboxId:8d445686f72b1716e0c253ae52b4d355d100be01307847cdf5d5287ddbb9e25b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722273397549024916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188869688c2292cb440067d4b4cfa9f3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd71b5556931becb81321807072e3a8100ce3344e4dea3237c6918a6c8e98cc5,PodSandboxId:49589b3e6647a2c3217bb88a13f7c1a69fce9ea3ae44163d31496bd19c36d434,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722273397500791848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc461575e1c166c1aa8b00d38af205a,},Annotations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1913b5d7-ee2c-47b0-8bfd-943be0f553b7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:24:07 ha-900414 crio[684]: time="2024-07-29 17:24:07.496245353Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7ae92e8f-bfa4-4c0c-a912-a4ff828a8de0 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:24:07 ha-900414 crio[684]: time="2024-07-29 17:24:07.496322217Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7ae92e8f-bfa4-4c0c-a912-a4ff828a8de0 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:24:07 ha-900414 crio[684]: time="2024-07-29 17:24:07.497699231Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c2b157c-bd9e-46de-9a57-26ac46bcebf0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:24:07 ha-900414 crio[684]: time="2024-07-29 17:24:07.498539042Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722273847498506440,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c2b157c-bd9e-46de-9a57-26ac46bcebf0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:24:07 ha-900414 crio[684]: time="2024-07-29 17:24:07.499112158Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2298d50-b2a4-4078-ab48-9b6a53e252fb name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:24:07 ha-900414 crio[684]: time="2024-07-29 17:24:07.499169448Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2298d50-b2a4-4078-ab48-9b6a53e252fb name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:24:07 ha-900414 crio[684]: time="2024-07-29 17:24:07.500095821Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:174e5d31268c70a798e1fa1fe5d2845d98eaed228a11b55810b7ca4680256a8e,PodSandboxId:7d2a64a5bcccdbfe3d1db48fd0a6231c01ec2f72f5944f5aa82835bdbbf8641b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722273570293151902,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4fv4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6,},Annotations:map[string]string{io.kubernetes.container.hash: bbf9a5b4,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7ffaf9ef2fda3e8c5965888c0244dd20c8cdc30b4ed1c300c5f9de3a70a127,PodSandboxId:7a0bb58ad2b90a00cbfe5381a420068caf367d6d0a46d8bfa235680d9a9e383c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722273433298849093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9r87x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc4709f-f07b-4694-a352-aedd9c67bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: a73c1fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911569fe2373d5193385d0fdcc98071bacd23c7de020ed4e2ab3a15a3793c2d2,PodSandboxId:f47facc78da61a96cbc7f88d068ff1130bdf82703fa98c5e773eba93b8000852,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722273433248509110,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-48j6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 14f903c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b419192dc8add024f08c798a5f50d7c6bd2ee0ae8a2280771508aebc78e20217,PodSandboxId:e8c37c9dd56b7d7518c4f43dfb13701b15d68f51590bd8e492cf12524a18465e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722273433176341595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50fa96e8-1ee5-4e09-a734-802dbcd02bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 1a126ce7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b182b72bc50740d9cc2e0ed8b5c1d4b8f58c58594cc462fc796a75ccce7d38,PodSandboxId:30715fa1b9f024468de573f3e60b03860bdea65df505677b107723e5e7663d18,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722273421363177930,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z9cvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2177daa-4efb-478c-845f-f30e77e91684,},Annotations:map[string]string{io.kubernetes.container.hash: 7870c1dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ef29620e9c9670549fa7741de5956157c7a03728d417b46b44a7b1abbf2ce9,PodSandboxId:250f31f0996e1b89f155a50b796cf5c3e03e4e621f62973dc2ca1b4547440256,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172227341
7715021107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tng4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2303269f-50d3-4a63-aa76-891f001e6f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e285077a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426b48b0fdbff14ce36fc0396074186cbd51533c984e6fac5f3f963bce611059,PodSandboxId:0c097128258009ad8eecfd45367bd8c008515e1f5f2371df23a13194dbe2a20c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222733999
19223743,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08540c30100787432ed84b2f9dea411c,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7721018288f905547c9c059b6453a96e4c74f3573058e88425444162b255edf,PodSandboxId:e2a054b42822ad7d37df60a69fdb759eb309b8ee40e4c712e2f7ae6a2aaa0e6c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722273397639690260,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba63e9544003a32c61ae2cfa7bb117,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a27f5a54bd43275313e419dabaa643ad1764f5cd10953333df1eea8a9a4bf1b,PodSandboxId:46030b1ba43cfae01b3b4a26ba23e19c1dade394973241fddfe9126def4aa597,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722273397619697045,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c283b6b662036e086a0948631d339c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ec5252a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:270db6978c4e4bce98a1f424ce50f66507840c818ab639d9ef02e8f96bab41d6,PodSandboxId:8d445686f72b1716e0c253ae52b4d355d100be01307847cdf5d5287ddbb9e25b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722273397549024916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188869688c2292cb440067d4b4cfa9f3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd71b5556931becb81321807072e3a8100ce3344e4dea3237c6918a6c8e98cc5,PodSandboxId:49589b3e6647a2c3217bb88a13f7c1a69fce9ea3ae44163d31496bd19c36d434,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722273397500791848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc461575e1c166c1aa8b00d38af205a,},Annotations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e2298d50-b2a4-4078-ab48-9b6a53e252fb name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:24:07 ha-900414 crio[684]: time="2024-07-29 17:24:07.538260036Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=a37854e9-5bfa-4f93-b1ed-90fe4b3a0a2f name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 17:24:07 ha-900414 crio[684]: time="2024-07-29 17:24:07.538554053Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7d2a64a5bcccdbfe3d1db48fd0a6231c01ec2f72f5944f5aa82835bdbbf8641b,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-4fv4t,Uid:bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722273569137500123,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-4fv4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T17:19:28.819287945Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e8c37c9dd56b7d7518c4f43dfb13701b15d68f51590bd8e492cf12524a18465e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:50fa96e8-1ee5-4e09-a734-802dbcd02bcc,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1722273432997411636,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50fa96e8-1ee5-4e09-a734-802dbcd02bcc,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T17:17:12.676633839Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f47facc78da61a96cbc7f88d068ff1130bdf82703fa98c5e773eba93b8000852,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-48j6w,Uid:306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722273432996030978,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-48j6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T17:17:12.683419417Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7a0bb58ad2b90a00cbfe5381a420068caf367d6d0a46d8bfa235680d9a9e383c,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-9r87x,Uid:fcc4709f-f07b-4694-a352-aedd9c67bbb2,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1722273432994378004,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-9r87x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc4709f-f07b-4694-a352-aedd9c67bbb2,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T17:17:12.685085307Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:250f31f0996e1b89f155a50b796cf5c3e03e4e621f62973dc2ca1b4547440256,Metadata:&PodSandboxMetadata{Name:kube-proxy-tng4t,Uid:2303269f-50d3-4a63-aa76-891f001e6f5d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722273417393407752,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tng4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2303269f-50d3-4a63-aa76-891f001e6f5d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-07-29T17:16:57.065349724Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:30715fa1b9f024468de573f3e60b03860bdea65df505677b107723e5e7663d18,Metadata:&PodSandboxMetadata{Name:kindnet-z9cvz,Uid:c2177daa-4efb-478c-845f-f30e77e91684,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722273417378823740,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-z9cvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2177daa-4efb-478c-845f-f30e77e91684,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T17:16:57.073057520Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e2a054b42822ad7d37df60a69fdb759eb309b8ee40e4c712e2f7ae6a2aaa0e6c,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-900414,Uid:37ba63e9544003a32c61ae2cfa7bb117,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1722273397317194519,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba63e9544003a32c61ae2cfa7bb117,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 37ba63e9544003a32c61ae2cfa7bb117,kubernetes.io/config.seen: 2024-07-29T17:16:36.830613745Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0c097128258009ad8eecfd45367bd8c008515e1f5f2371df23a13194dbe2a20c,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-900414,Uid:08540c30100787432ed84b2f9dea411c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722273397316365291,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08540c30100787432ed84b2f9dea411c,},Annotations:map[string]string{kubernetes.io/config.hash: 0854
0c30100787432ed84b2f9dea411c,kubernetes.io/config.seen: 2024-07-29T17:16:36.830614850Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:46030b1ba43cfae01b3b4a26ba23e19c1dade394973241fddfe9126def4aa597,Metadata:&PodSandboxMetadata{Name:etcd-ha-900414,Uid:0c283b6b662036e086a0948631d339c9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722273397305462204,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c283b6b662036e086a0948631d339c9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.114:2379,kubernetes.io/config.hash: 0c283b6b662036e086a0948631d339c9,kubernetes.io/config.seen: 2024-07-29T17:16:36.830615916Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8d445686f72b1716e0c253ae52b4d355d100be01307847cdf5d5287ddbb9e25b,Metadata:&PodSandboxMetadata{Name:kube-c
ontroller-manager-ha-900414,Uid:188869688c2292cb440067d4b4cfa9f3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722273397293039085,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188869688c2292cb440067d4b4cfa9f3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 188869688c2292cb440067d4b4cfa9f3,kubernetes.io/config.seen: 2024-07-29T17:16:36.830612273Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:49589b3e6647a2c3217bb88a13f7c1a69fce9ea3ae44163d31496bd19c36d434,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-900414,Uid:3dc461575e1c166c1aa8b00d38af205a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722273397291397183,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-900414,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc461575e1c166c1aa8b00d38af205a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.114:8443,kubernetes.io/config.hash: 3dc461575e1c166c1aa8b00d38af205a,kubernetes.io/config.seen: 2024-07-29T17:16:36.830604895Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=a37854e9-5bfa-4f93-b1ed-90fe4b3a0a2f name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 17:24:07 ha-900414 crio[684]: time="2024-07-29 17:24:07.539589909Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41871c15-3279-4220-90e2-d9d34f776bb3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:24:07 ha-900414 crio[684]: time="2024-07-29 17:24:07.539650003Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41871c15-3279-4220-90e2-d9d34f776bb3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:24:07 ha-900414 crio[684]: time="2024-07-29 17:24:07.539880694Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:174e5d31268c70a798e1fa1fe5d2845d98eaed228a11b55810b7ca4680256a8e,PodSandboxId:7d2a64a5bcccdbfe3d1db48fd0a6231c01ec2f72f5944f5aa82835bdbbf8641b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722273570293151902,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4fv4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6,},Annotations:map[string]string{io.kubernetes.container.hash: bbf9a5b4,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7ffaf9ef2fda3e8c5965888c0244dd20c8cdc30b4ed1c300c5f9de3a70a127,PodSandboxId:7a0bb58ad2b90a00cbfe5381a420068caf367d6d0a46d8bfa235680d9a9e383c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722273433298849093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9r87x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc4709f-f07b-4694-a352-aedd9c67bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: a73c1fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911569fe2373d5193385d0fdcc98071bacd23c7de020ed4e2ab3a15a3793c2d2,PodSandboxId:f47facc78da61a96cbc7f88d068ff1130bdf82703fa98c5e773eba93b8000852,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722273433248509110,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-48j6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 14f903c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b419192dc8add024f08c798a5f50d7c6bd2ee0ae8a2280771508aebc78e20217,PodSandboxId:e8c37c9dd56b7d7518c4f43dfb13701b15d68f51590bd8e492cf12524a18465e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722273433176341595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50fa96e8-1ee5-4e09-a734-802dbcd02bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 1a126ce7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b182b72bc50740d9cc2e0ed8b5c1d4b8f58c58594cc462fc796a75ccce7d38,PodSandboxId:30715fa1b9f024468de573f3e60b03860bdea65df505677b107723e5e7663d18,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722273421363177930,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z9cvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2177daa-4efb-478c-845f-f30e77e91684,},Annotations:map[string]string{io.kubernetes.container.hash: 7870c1dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ef29620e9c9670549fa7741de5956157c7a03728d417b46b44a7b1abbf2ce9,PodSandboxId:250f31f0996e1b89f155a50b796cf5c3e03e4e621f62973dc2ca1b4547440256,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172227341
7715021107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tng4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2303269f-50d3-4a63-aa76-891f001e6f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e285077a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426b48b0fdbff14ce36fc0396074186cbd51533c984e6fac5f3f963bce611059,PodSandboxId:0c097128258009ad8eecfd45367bd8c008515e1f5f2371df23a13194dbe2a20c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222733999
19223743,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08540c30100787432ed84b2f9dea411c,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7721018288f905547c9c059b6453a96e4c74f3573058e88425444162b255edf,PodSandboxId:e2a054b42822ad7d37df60a69fdb759eb309b8ee40e4c712e2f7ae6a2aaa0e6c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722273397639690260,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba63e9544003a32c61ae2cfa7bb117,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a27f5a54bd43275313e419dabaa643ad1764f5cd10953333df1eea8a9a4bf1b,PodSandboxId:46030b1ba43cfae01b3b4a26ba23e19c1dade394973241fddfe9126def4aa597,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722273397619697045,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c283b6b662036e086a0948631d339c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ec5252a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:270db6978c4e4bce98a1f424ce50f66507840c818ab639d9ef02e8f96bab41d6,PodSandboxId:8d445686f72b1716e0c253ae52b4d355d100be01307847cdf5d5287ddbb9e25b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722273397549024916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188869688c2292cb440067d4b4cfa9f3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd71b5556931becb81321807072e3a8100ce3344e4dea3237c6918a6c8e98cc5,PodSandboxId:49589b3e6647a2c3217bb88a13f7c1a69fce9ea3ae44163d31496bd19c36d434,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722273397500791848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc461575e1c166c1aa8b00d38af205a,},Annotations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=41871c15-3279-4220-90e2-d9d34f776bb3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:24:07 ha-900414 crio[684]: time="2024-07-29 17:24:07.546891964Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=63a95d54-f3fe-4b7c-872d-ac7262aa3cfb name=/runtime.v1.RuntimeService/Version
	Jul 29 17:24:07 ha-900414 crio[684]: time="2024-07-29 17:24:07.547055355Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63a95d54-f3fe-4b7c-872d-ac7262aa3cfb name=/runtime.v1.RuntimeService/Version
	Jul 29 17:24:07 ha-900414 crio[684]: time="2024-07-29 17:24:07.548479300Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a97fac4-9664-4f70-99c1-102aa8717408 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:24:07 ha-900414 crio[684]: time="2024-07-29 17:24:07.548895044Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722273847548876373,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a97fac4-9664-4f70-99c1-102aa8717408 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:24:07 ha-900414 crio[684]: time="2024-07-29 17:24:07.550029307Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6fabd6c3-9339-45e0-a7d8-2b7ada6b2296 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:24:07 ha-900414 crio[684]: time="2024-07-29 17:24:07.550101204Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6fabd6c3-9339-45e0-a7d8-2b7ada6b2296 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:24:07 ha-900414 crio[684]: time="2024-07-29 17:24:07.550350541Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:174e5d31268c70a798e1fa1fe5d2845d98eaed228a11b55810b7ca4680256a8e,PodSandboxId:7d2a64a5bcccdbfe3d1db48fd0a6231c01ec2f72f5944f5aa82835bdbbf8641b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722273570293151902,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4fv4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6,},Annotations:map[string]string{io.kubernetes.container.hash: bbf9a5b4,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7ffaf9ef2fda3e8c5965888c0244dd20c8cdc30b4ed1c300c5f9de3a70a127,PodSandboxId:7a0bb58ad2b90a00cbfe5381a420068caf367d6d0a46d8bfa235680d9a9e383c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722273433298849093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9r87x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc4709f-f07b-4694-a352-aedd9c67bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: a73c1fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911569fe2373d5193385d0fdcc98071bacd23c7de020ed4e2ab3a15a3793c2d2,PodSandboxId:f47facc78da61a96cbc7f88d068ff1130bdf82703fa98c5e773eba93b8000852,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722273433248509110,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-48j6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 14f903c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b419192dc8add024f08c798a5f50d7c6bd2ee0ae8a2280771508aebc78e20217,PodSandboxId:e8c37c9dd56b7d7518c4f43dfb13701b15d68f51590bd8e492cf12524a18465e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722273433176341595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50fa96e8-1ee5-4e09-a734-802dbcd02bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 1a126ce7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b182b72bc50740d9cc2e0ed8b5c1d4b8f58c58594cc462fc796a75ccce7d38,PodSandboxId:30715fa1b9f024468de573f3e60b03860bdea65df505677b107723e5e7663d18,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722273421363177930,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z9cvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2177daa-4efb-478c-845f-f30e77e91684,},Annotations:map[string]string{io.kubernetes.container.hash: 7870c1dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ef29620e9c9670549fa7741de5956157c7a03728d417b46b44a7b1abbf2ce9,PodSandboxId:250f31f0996e1b89f155a50b796cf5c3e03e4e621f62973dc2ca1b4547440256,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172227341
7715021107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tng4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2303269f-50d3-4a63-aa76-891f001e6f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e285077a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426b48b0fdbff14ce36fc0396074186cbd51533c984e6fac5f3f963bce611059,PodSandboxId:0c097128258009ad8eecfd45367bd8c008515e1f5f2371df23a13194dbe2a20c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222733999
19223743,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08540c30100787432ed84b2f9dea411c,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7721018288f905547c9c059b6453a96e4c74f3573058e88425444162b255edf,PodSandboxId:e2a054b42822ad7d37df60a69fdb759eb309b8ee40e4c712e2f7ae6a2aaa0e6c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722273397639690260,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba63e9544003a32c61ae2cfa7bb117,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a27f5a54bd43275313e419dabaa643ad1764f5cd10953333df1eea8a9a4bf1b,PodSandboxId:46030b1ba43cfae01b3b4a26ba23e19c1dade394973241fddfe9126def4aa597,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722273397619697045,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c283b6b662036e086a0948631d339c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ec5252a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:270db6978c4e4bce98a1f424ce50f66507840c818ab639d9ef02e8f96bab41d6,PodSandboxId:8d445686f72b1716e0c253ae52b4d355d100be01307847cdf5d5287ddbb9e25b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722273397549024916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188869688c2292cb440067d4b4cfa9f3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd71b5556931becb81321807072e3a8100ce3344e4dea3237c6918a6c8e98cc5,PodSandboxId:49589b3e6647a2c3217bb88a13f7c1a69fce9ea3ae44163d31496bd19c36d434,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722273397500791848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc461575e1c166c1aa8b00d38af205a,},Annotations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6fabd6c3-9339-45e0-a7d8-2b7ada6b2296 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	174e5d31268c7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   7d2a64a5bcccd       busybox-fc5497c4f-4fv4t
	7d7ffaf9ef2fd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   7a0bb58ad2b90       coredns-7db6d8ff4d-9r87x
	911569fe2373d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   f47facc78da61       coredns-7db6d8ff4d-48j6w
	b419192dc8add       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   e8c37c9dd56b7       storage-provisioner
	10b182b72bc50       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    7 minutes ago       Running             kindnet-cni               0                   30715fa1b9f02       kindnet-z9cvz
	37ef29620e9c9       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                0                   250f31f0996e1       kube-proxy-tng4t
	426b48b0fdbff       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   0c09712825800       kube-vip-ha-900414
	a7721018288f9       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   e2a054b42822a       kube-scheduler-ha-900414
	2a27f5a54bd43       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   46030b1ba43cf       etcd-ha-900414
	270db6978c4e4       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   8d445686f72b1       kube-controller-manager-ha-900414
	dd71b5556931b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   49589b3e6647a       kube-apiserver-ha-900414
	
	
	==> coredns [7d7ffaf9ef2fda3e8c5965888c0244dd20c8cdc30b4ed1c300c5f9de3a70a127] <==
	[INFO] 10.244.2.2:33776 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00179506s
	[INFO] 10.244.0.4:52013 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177704s
	[INFO] 10.244.0.4:35837 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000126238s
	[INFO] 10.244.0.4:49524 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118581s
	[INFO] 10.244.1.2:48270 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204422s
	[INFO] 10.244.1.2:35645 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001855497s
	[INFO] 10.244.1.2:43192 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000177303s
	[INFO] 10.244.1.2:33281 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000160789s
	[INFO] 10.244.1.2:57013 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097416s
	[INFO] 10.244.2.2:38166 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136029s
	[INFO] 10.244.2.2:33640 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001913014s
	[INFO] 10.244.2.2:47485 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104905s
	[INFO] 10.244.2.2:45778 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170534s
	[INFO] 10.244.2.2:59234 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076101s
	[INFO] 10.244.0.4:50535 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065536s
	[INFO] 10.244.1.2:58622 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133396s
	[INFO] 10.244.1.2:33438 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102338s
	[INFO] 10.244.2.2:45926 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000383812s
	[INFO] 10.244.2.2:56980 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000187545s
	[INFO] 10.244.2.2:43137 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00016801s
	[INFO] 10.244.0.4:57612 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000159389s
	[INFO] 10.244.1.2:58047 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014126s
	[INFO] 10.244.1.2:45045 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000123813s
	[INFO] 10.244.2.2:35311 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000173973s
	[INFO] 10.244.2.2:47044 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000140928s
	
	
	==> coredns [911569fe2373d5193385d0fdcc98071bacd23c7de020ed4e2ab3a15a3793c2d2] <==
	[INFO] 10.244.2.2:54591 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000473416s
	[INFO] 10.244.0.4:51118 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 1.682006305s
	[INFO] 10.244.0.4:48566 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000231629s
	[INFO] 10.244.0.4:43462 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000157968s
	[INFO] 10.244.0.4:46703 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.079065283s
	[INFO] 10.244.0.4:43001 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000165887s
	[INFO] 10.244.1.2:43677 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128129s
	[INFO] 10.244.1.2:39513 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001354968s
	[INFO] 10.244.1.2:52828 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000183362s
	[INFO] 10.244.2.2:51403 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116578s
	[INFO] 10.244.2.2:47706 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001162998s
	[INFO] 10.244.2.2:39349 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000083497s
	[INFO] 10.244.0.4:43643 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164666s
	[INFO] 10.244.0.4:51941 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067553s
	[INFO] 10.244.0.4:33186 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117492s
	[INFO] 10.244.1.2:36002 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170421s
	[INFO] 10.244.1.2:41186 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135424s
	[INFO] 10.244.2.2:40469 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00015464s
	[INFO] 10.244.0.4:58750 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131521s
	[INFO] 10.244.0.4:59782 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141641s
	[INFO] 10.244.0.4:47289 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000189592s
	[INFO] 10.244.1.2:44743 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000121922s
	[INFO] 10.244.1.2:60901 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099491s
	[INFO] 10.244.2.2:53612 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000143831s
	[INFO] 10.244.2.2:35693 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000120049s
	
	
	==> describe nodes <==
	Name:               ha-900414
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-900414
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=ha-900414
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T17_16_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:16:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-900414
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:24:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:19:47 +0000   Mon, 29 Jul 2024 17:16:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:19:47 +0000   Mon, 29 Jul 2024 17:16:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:19:47 +0000   Mon, 29 Jul 2024 17:16:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:19:47 +0000   Mon, 29 Jul 2024 17:17:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.114
	  Hostname:    ha-900414
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d0301ef966ab4d039cde4e4959e83ea6
	  System UUID:                d0301ef9-66ab-4d03-9cde-4e4959e83ea6
	  Boot ID:                    ea7d1983-2f49-4874-b67f-d8eea13c27d6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4fv4t              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 coredns-7db6d8ff4d-48j6w             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m10s
	  kube-system                 coredns-7db6d8ff4d-9r87x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m10s
	  kube-system                 etcd-ha-900414                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m24s
	  kube-system                 kindnet-z9cvz                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m10s
	  kube-system                 kube-apiserver-ha-900414             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 kube-controller-manager-ha-900414    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 kube-proxy-tng4t                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-scheduler-ha-900414             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 kube-vip-ha-900414                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m9s   kube-proxy       
	  Normal  Starting                 7m24s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m24s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m24s  kubelet          Node ha-900414 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m24s  kubelet          Node ha-900414 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m24s  kubelet          Node ha-900414 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m11s  node-controller  Node ha-900414 event: Registered Node ha-900414 in Controller
	  Normal  NodeReady                6m55s  kubelet          Node ha-900414 status is now: NodeReady
	  Normal  RegisteredNode           5m57s  node-controller  Node ha-900414 event: Registered Node ha-900414 in Controller
	  Normal  RegisteredNode           4m45s  node-controller  Node ha-900414 event: Registered Node ha-900414 in Controller
	
	
	Name:               ha-900414-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-900414-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=ha-900414
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T17_17_55_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:17:51 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-900414-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:20:45 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 17:19:53 +0000   Mon, 29 Jul 2024 17:21:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 17:19:53 +0000   Mon, 29 Jul 2024 17:21:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 17:19:53 +0000   Mon, 29 Jul 2024 17:21:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 17:19:53 +0000   Mon, 29 Jul 2024 17:21:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.111
	  Hostname:    ha-900414-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 854b5d80a28944e1a0d7e90a65ef964f
	  System UUID:                854b5d80-a289-44e1-a0d7-e90a65ef964f
	  Boot ID:                    b75c3f88-64bd-447d-a8b3-d30def6f548b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dqz55                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 etcd-ha-900414-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m14s
	  kube-system                 kindnet-kdzhk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m16s
	  kube-system                 kube-apiserver-ha-900414-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-controller-manager-ha-900414-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-proxy-bgq99                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-scheduler-ha-900414-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-vip-ha-900414-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m12s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  6m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m16s                  node-controller  Node ha-900414-m02 event: Registered Node ha-900414-m02 in Controller
	  Normal  NodeHasSufficientMemory  6m16s (x8 over 6m17s)  kubelet          Node ha-900414-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m16s (x8 over 6m17s)  kubelet          Node ha-900414-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m16s (x7 over 6m17s)  kubelet          Node ha-900414-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m57s                  node-controller  Node ha-900414-m02 event: Registered Node ha-900414-m02 in Controller
	  Normal  RegisteredNode           4m45s                  node-controller  Node ha-900414-m02 event: Registered Node ha-900414-m02 in Controller
	  Normal  NodeNotReady             2m41s                  node-controller  Node ha-900414-m02 status is now: NodeNotReady
	
	
	Name:               ha-900414-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-900414-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=ha-900414
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T17_19_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:19:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-900414-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:23:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:19:32 +0000   Mon, 29 Jul 2024 17:19:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:19:32 +0000   Mon, 29 Jul 2024 17:19:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:19:32 +0000   Mon, 29 Jul 2024 17:19:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:19:32 +0000   Mon, 29 Jul 2024 17:19:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    ha-900414-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a83fa48485e44a66899d03b0bc3026ab
	  System UUID:                a83fa484-85e4-4a66-899d-03b0bc3026ab
	  Boot ID:                    b5b7f427-05a9-48d1-b8b4-44023d1602b3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-s9sz8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 etcd-ha-900414-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m3s
	  kube-system                 kindnet-6vzd2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m5s
	  kube-system                 kube-apiserver-ha-900414-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-controller-manager-ha-900414-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-proxy-wnfsb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-scheduler-ha-900414-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-vip-ha-900414-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  5m5s (x8 over 5m5s)  kubelet          Node ha-900414-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m5s (x8 over 5m5s)  kubelet          Node ha-900414-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m5s (x7 over 5m5s)  kubelet          Node ha-900414-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m2s                 node-controller  Node ha-900414-m03 event: Registered Node ha-900414-m03 in Controller
	  Normal  RegisteredNode           5m1s                 node-controller  Node ha-900414-m03 event: Registered Node ha-900414-m03 in Controller
	  Normal  RegisteredNode           4m45s                node-controller  Node ha-900414-m03 event: Registered Node ha-900414-m03 in Controller
	
	
	Name:               ha-900414-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-900414-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=ha-900414
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T17_20_07_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:20:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-900414-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:24:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:20:37 +0000   Mon, 29 Jul 2024 17:20:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:20:37 +0000   Mon, 29 Jul 2024 17:20:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:20:37 +0000   Mon, 29 Jul 2024 17:20:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:20:37 +0000   Mon, 29 Jul 2024 17:20:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.156
	  Hostname:    ha-900414-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 82b534ad740b47cbae65e1e5acf41d9a
	  System UUID:                82b534ad-740b-47cb-ae65-e1e5acf41d9a
	  Boot ID:                    0dc8577a-0725-49e3-80b3-d7aff48b060d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4fsvj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m1s
	  kube-system                 kube-proxy-hf5lx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m1s (x3 over 4m1s)  kubelet          Node ha-900414-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m1s (x3 over 4m1s)  kubelet          Node ha-900414-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m1s (x3 over 4m1s)  kubelet          Node ha-900414-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m                   node-controller  Node ha-900414-m04 event: Registered Node ha-900414-m04 in Controller
	  Normal  RegisteredNode           3m57s                node-controller  Node ha-900414-m04 event: Registered Node ha-900414-m04 in Controller
	  Normal  RegisteredNode           3m56s                node-controller  Node ha-900414-m04 event: Registered Node ha-900414-m04 in Controller
	  Normal  NodeReady                3m42s                kubelet          Node ha-900414-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul29 17:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050825] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040072] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.771696] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.434213] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.575347] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.778112] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.061146] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062274] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.167660] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.152034] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.281641] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.241432] systemd-fstab-generator[770]: Ignoring "noauto" option for root device
	[  +5.177057] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[  +0.055961] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.040447] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	[  +0.087819] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.082632] kauditd_printk_skb: 21 callbacks suppressed
	[Jul29 17:17] kauditd_printk_skb: 38 callbacks suppressed
	[ +45.217420] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [2a27f5a54bd43275313e419dabaa643ad1764f5cd10953333df1eea8a9a4bf1b] <==
	{"level":"warn","ts":"2024-07-29T17:24:07.827375Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:24:07.834665Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:24:07.839266Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:24:07.84102Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:24:07.851862Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:24:07.859192Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:24:07.865462Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:24:07.86991Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:24:07.873416Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:24:07.882339Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:24:07.891038Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:24:07.896876Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:24:07.900324Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:24:07.903295Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:24:07.909531Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:24:07.915785Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:24:07.921529Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:24:07.924812Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:24:07.927986Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:24:07.933669Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:24:07.939659Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:24:07.941055Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:24:07.945655Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:24:07.96127Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:24:07.962882Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:24:08 up 8 min,  0 users,  load average: 0.25, 0.21, 0.11
	Linux ha-900414 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [10b182b72bc50740d9cc2e0ed8b5c1d4b8f58c58594cc462fc796a75ccce7d38] <==
	I0729 17:23:32.576053       1 main.go:322] Node ha-900414-m02 has CIDR [10.244.1.0/24] 
	I0729 17:23:42.579858       1 main.go:295] Handling node with IPs: map[192.168.39.156:{}]
	I0729 17:23:42.579898       1 main.go:322] Node ha-900414-m04 has CIDR [10.244.3.0/24] 
	I0729 17:23:42.580169       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0729 17:23:42.580196       1 main.go:299] handling current node
	I0729 17:23:42.580226       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0729 17:23:42.580246       1 main.go:322] Node ha-900414-m02 has CIDR [10.244.1.0/24] 
	I0729 17:23:42.580310       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0729 17:23:42.580332       1 main.go:322] Node ha-900414-m03 has CIDR [10.244.2.0/24] 
	I0729 17:23:52.577088       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0729 17:23:52.577222       1 main.go:299] handling current node
	I0729 17:23:52.577260       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0729 17:23:52.577287       1 main.go:322] Node ha-900414-m02 has CIDR [10.244.1.0/24] 
	I0729 17:23:52.577453       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0729 17:23:52.577508       1 main.go:322] Node ha-900414-m03 has CIDR [10.244.2.0/24] 
	I0729 17:23:52.577601       1 main.go:295] Handling node with IPs: map[192.168.39.156:{}]
	I0729 17:23:52.577626       1 main.go:322] Node ha-900414-m04 has CIDR [10.244.3.0/24] 
	I0729 17:24:02.570193       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0729 17:24:02.570642       1 main.go:299] handling current node
	I0729 17:24:02.570755       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0729 17:24:02.570781       1 main.go:322] Node ha-900414-m02 has CIDR [10.244.1.0/24] 
	I0729 17:24:02.571002       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0729 17:24:02.571044       1 main.go:322] Node ha-900414-m03 has CIDR [10.244.2.0/24] 
	I0729 17:24:02.571178       1 main.go:295] Handling node with IPs: map[192.168.39.156:{}]
	I0729 17:24:02.571214       1 main.go:322] Node ha-900414-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [dd71b5556931becb81321807072e3a8100ce3344e4dea3237c6918a6c8e98cc5] <==
	I0729 17:16:56.589158       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0729 17:16:57.040569       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0729 17:19:31.384720       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45810: use of closed network connection
	E0729 17:19:31.571363       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45816: use of closed network connection
	E0729 17:19:31.761636       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45832: use of closed network connection
	E0729 17:19:33.691188       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45858: use of closed network connection
	E0729 17:19:33.869730       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45868: use of closed network connection
	E0729 17:19:34.044313       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45892: use of closed network connection
	E0729 17:19:34.218790       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45908: use of closed network connection
	E0729 17:19:34.404623       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45926: use of closed network connection
	E0729 17:19:34.600229       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45946: use of closed network connection
	E0729 17:19:34.890385       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45968: use of closed network connection
	E0729 17:19:35.068689       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45988: use of closed network connection
	E0729 17:19:35.247808       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46000: use of closed network connection
	E0729 17:19:35.438480       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46016: use of closed network connection
	E0729 17:19:35.616766       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46036: use of closed network connection
	E0729 17:19:35.788203       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46054: use of closed network connection
	E0729 17:20:07.236783       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
	E0729 17:20:07.237465       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0729 17:20:07.237564       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 53.071µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0729 17:20:07.237641       1 wrap.go:54] timeout or abort while handling: method=GET URI="/api/v1/nodes/ha-900414-m04?timeout=10s" audit-ID="7f040c2d-02c2-4f9f-aecf-dc7d5824210b"
	E0729 17:20:07.237680       1 timeout.go:142] post-timeout activity - time-elapsed: 4.344µs, GET "/api/v1/nodes/ha-900414-m04" result: <nil>
	E0729 17:20:07.237867       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0729 17:20:07.238860       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0729 17:20:07.239080       1 timeout.go:142] post-timeout activity - time-elapsed: 1.768546ms, PATCH "/api/v1/namespaces/default/events/ha-900414-m04.17e6beb84d7e621f" result: <nil>
	
	
	==> kube-controller-manager [270db6978c4e4bce98a1f424ce50f66507840c818ab639d9ef02e8f96bab41d6] <==
	I0729 17:17:51.236250       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-900414-m02"
	I0729 17:19:02.587222       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-900414-m03\" does not exist"
	I0729 17:19:02.619098       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-900414-m03" podCIDRs=["10.244.2.0/24"]
	I0729 17:19:06.272231       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-900414-m03"
	I0729 17:19:28.821509       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.230041ms"
	I0729 17:19:28.901804       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.604594ms"
	I0729 17:19:29.055835       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="153.722953ms"
	E0729 17:19:29.056163       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0729 17:19:29.274045       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="217.381236ms"
	I0729 17:19:29.320884       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.624829ms"
	I0729 17:19:29.321581       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.563µs"
	I0729 17:19:30.511849       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.847622ms"
	I0729 17:19:30.512131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.623µs"
	I0729 17:19:30.575143       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.563726ms"
	I0729 17:19:30.575478       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.565µs"
	I0729 17:19:30.944109       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.524427ms"
	I0729 17:19:30.944603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.801µs"
	E0729 17:20:06.581048       1 certificate_controller.go:146] Sync csr-gb6wr failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-gb6wr": the object has been modified; please apply your changes to the latest version and try again
	I0729 17:20:06.814275       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-900414-m04\" does not exist"
	I0729 17:20:06.986373       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-900414-m04" podCIDRs=["10.244.3.0/24"]
	I0729 17:20:11.300187       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-900414-m04"
	I0729 17:20:25.165063       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-900414-m04"
	I0729 17:21:26.327741       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-900414-m04"
	I0729 17:21:26.488890       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.445665ms"
	I0729 17:21:26.489088       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="102.001µs"
	
	
	==> kube-proxy [37ef29620e9c9670549fa7741de5956157c7a03728d417b46b44a7b1abbf2ce9] <==
	I0729 17:16:58.224057       1 server_linux.go:69] "Using iptables proxy"
	I0729 17:16:58.277839       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.114"]
	I0729 17:16:58.378129       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 17:16:58.378189       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 17:16:58.378211       1 server_linux.go:165] "Using iptables Proxier"
	I0729 17:16:58.386893       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 17:16:58.387187       1 server.go:872] "Version info" version="v1.30.3"
	I0729 17:16:58.387218       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 17:16:58.391582       1 config.go:192] "Starting service config controller"
	I0729 17:16:58.392030       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 17:16:58.392111       1 config.go:101] "Starting endpoint slice config controller"
	I0729 17:16:58.392133       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 17:16:58.394091       1 config.go:319] "Starting node config controller"
	I0729 17:16:58.394116       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 17:16:58.492785       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 17:16:58.492841       1 shared_informer.go:320] Caches are synced for service config
	I0729 17:16:58.495011       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a7721018288f905547c9c059b6453a96e4c74f3573058e88425444162b255edf] <==
	E0729 17:16:42.148247       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 17:16:42.554255       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 17:16:42.554382       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 17:16:44.341514       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 17:19:28.758828       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="c02b335b-93e1-41d5-b53c-fc95bf6ecd59" pod="default/busybox-fc5497c4f-dqz55" assumedNode="ha-900414-m02" currentNode="ha-900414-m03"
	E0729 17:19:28.768612       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-dqz55\": pod busybox-fc5497c4f-dqz55 is already assigned to node \"ha-900414-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-dqz55" node="ha-900414-m03"
	E0729 17:19:28.768738       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod c02b335b-93e1-41d5-b53c-fc95bf6ecd59(default/busybox-fc5497c4f-dqz55) was assumed on ha-900414-m03 but assigned to ha-900414-m02" pod="default/busybox-fc5497c4f-dqz55"
	E0729 17:19:28.768763       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-dqz55\": pod busybox-fc5497c4f-dqz55 is already assigned to node \"ha-900414-m02\"" pod="default/busybox-fc5497c4f-dqz55"
	I0729 17:19:28.768804       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-dqz55" node="ha-900414-m02"
	E0729 17:19:28.803899       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-s9sz8\": pod busybox-fc5497c4f-s9sz8 is already assigned to node \"ha-900414-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-s9sz8" node="ha-900414-m03"
	E0729 17:19:28.804818       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 0a2e4648-8455-4ecc-bfcc-5642bfdbb2fe(default/busybox-fc5497c4f-s9sz8) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-s9sz8"
	E0729 17:19:28.805238       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-s9sz8\": pod busybox-fc5497c4f-s9sz8 is already assigned to node \"ha-900414-m03\"" pod="default/busybox-fc5497c4f-s9sz8"
	I0729 17:19:28.805313       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-s9sz8" node="ha-900414-m03"
	E0729 17:19:28.839698       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-4fv4t\": pod busybox-fc5497c4f-4fv4t is already assigned to node \"ha-900414\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-4fv4t" node="ha-900414"
	E0729 17:19:28.840078       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6(default/busybox-fc5497c4f-4fv4t) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-4fv4t"
	E0729 17:19:28.840422       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-4fv4t\": pod busybox-fc5497c4f-4fv4t is already assigned to node \"ha-900414\"" pod="default/busybox-fc5497c4f-4fv4t"
	I0729 17:19:28.840664       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-4fv4t" node="ha-900414"
	E0729 17:20:07.262272       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-hf5lx\": pod kube-proxy-hf5lx is already assigned to node \"ha-900414-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-hf5lx" node="ha-900414-m04"
	E0729 17:20:07.263149       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-hf5lx\": pod kube-proxy-hf5lx is already assigned to node \"ha-900414-m04\"" pod="kube-system/kube-proxy-hf5lx"
	E0729 17:20:07.264308       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4fsvj\": pod kindnet-4fsvj is already assigned to node \"ha-900414-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4fsvj" node="ha-900414-m04"
	E0729 17:20:07.264446       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4fsvj\": pod kindnet-4fsvj is already assigned to node \"ha-900414-m04\"" pod="kube-system/kindnet-4fsvj"
	E0729 17:20:07.308186       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rbc8g\": pod kindnet-rbc8g is already assigned to node \"ha-900414-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-rbc8g" node="ha-900414-m04"
	E0729 17:20:07.308315       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod fa8621a0-f2ea-48fe-8912-76fdd3bd193f(kube-system/kindnet-rbc8g) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-rbc8g"
	E0729 17:20:07.309175       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rbc8g\": pod kindnet-rbc8g is already assigned to node \"ha-900414-m04\"" pod="kube-system/kindnet-rbc8g"
	I0729 17:20:07.309262       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rbc8g" node="ha-900414-m04"
	
	
	==> kubelet <==
	Jul 29 17:19:43 ha-900414 kubelet[1377]: E0729 17:19:43.789901    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:19:43 ha-900414 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:19:43 ha-900414 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:19:43 ha-900414 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:19:43 ha-900414 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:20:43 ha-900414 kubelet[1377]: E0729 17:20:43.786522    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:20:43 ha-900414 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:20:43 ha-900414 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:20:43 ha-900414 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:20:43 ha-900414 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:21:43 ha-900414 kubelet[1377]: E0729 17:21:43.785651    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:21:43 ha-900414 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:21:43 ha-900414 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:21:43 ha-900414 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:21:43 ha-900414 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:22:43 ha-900414 kubelet[1377]: E0729 17:22:43.786632    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:22:43 ha-900414 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:22:43 ha-900414 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:22:43 ha-900414 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:22:43 ha-900414 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:23:43 ha-900414 kubelet[1377]: E0729 17:23:43.786205    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:23:43 ha-900414 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:23:43 ha-900414 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:23:43 ha-900414 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:23:43 ha-900414 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-900414 -n ha-900414
helpers_test.go:261: (dbg) Run:  kubectl --context ha-900414 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (58.51s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (357.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-900414 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-900414 -v=7 --alsologtostderr
E0729 17:24:52.720209   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-900414 -v=7 --alsologtostderr: exit status 82 (2m1.91909631s)

                                                
                                                
-- stdout --
	* Stopping node "ha-900414-m04"  ...
	* Stopping node "ha-900414-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:24:09.384641   36041 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:24:09.384915   36041 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:24:09.384925   36041 out.go:304] Setting ErrFile to fd 2...
	I0729 17:24:09.384929   36041 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:24:09.385131   36041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 17:24:09.385344   36041 out.go:298] Setting JSON to false
	I0729 17:24:09.385429   36041 mustload.go:65] Loading cluster: ha-900414
	I0729 17:24:09.385760   36041 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:24:09.385847   36041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/config.json ...
	I0729 17:24:09.386017   36041 mustload.go:65] Loading cluster: ha-900414
	I0729 17:24:09.386142   36041 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:24:09.386167   36041 stop.go:39] StopHost: ha-900414-m04
	I0729 17:24:09.386555   36041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:24:09.386599   36041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:24:09.401082   36041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41657
	I0729 17:24:09.401496   36041 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:24:09.402025   36041 main.go:141] libmachine: Using API Version  1
	I0729 17:24:09.402049   36041 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:24:09.402407   36041 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:24:09.404951   36041 out.go:177] * Stopping node "ha-900414-m04"  ...
	I0729 17:24:09.406171   36041 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 17:24:09.406205   36041 main.go:141] libmachine: (ha-900414-m04) Calling .DriverName
	I0729 17:24:09.406450   36041 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 17:24:09.406475   36041 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHHostname
	I0729 17:24:09.408954   36041 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:24:09.409363   36041 main.go:141] libmachine: (ha-900414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:eb:e5", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:19:51 +0000 UTC Type:0 Mac:52:54:00:a6:eb:e5 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-900414-m04 Clientid:01:52:54:00:a6:eb:e5}
	I0729 17:24:09.409396   36041 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined IP address 192.168.39.156 and MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:24:09.409472   36041 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHPort
	I0729 17:24:09.409620   36041 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHKeyPath
	I0729 17:24:09.409774   36041 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHUsername
	I0729 17:24:09.409903   36041 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m04/id_rsa Username:docker}
	I0729 17:24:09.497772   36041 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 17:24:09.552606   36041 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 17:24:09.607730   36041 main.go:141] libmachine: Stopping "ha-900414-m04"...
	I0729 17:24:09.607760   36041 main.go:141] libmachine: (ha-900414-m04) Calling .GetState
	I0729 17:24:09.609121   36041 main.go:141] libmachine: (ha-900414-m04) Calling .Stop
	I0729 17:24:09.613286   36041 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 0/120
	I0729 17:24:10.845491   36041 main.go:141] libmachine: (ha-900414-m04) Calling .GetState
	I0729 17:24:10.846870   36041 main.go:141] libmachine: Machine "ha-900414-m04" was stopped.
	I0729 17:24:10.846887   36041 stop.go:75] duration metric: took 1.440715913s to stop
	I0729 17:24:10.846927   36041 stop.go:39] StopHost: ha-900414-m03
	I0729 17:24:10.847232   36041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:24:10.847368   36041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:24:10.862468   36041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37117
	I0729 17:24:10.862870   36041 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:24:10.863320   36041 main.go:141] libmachine: Using API Version  1
	I0729 17:24:10.863338   36041 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:24:10.863678   36041 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:24:10.865461   36041 out.go:177] * Stopping node "ha-900414-m03"  ...
	I0729 17:24:10.866566   36041 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 17:24:10.866597   36041 main.go:141] libmachine: (ha-900414-m03) Calling .DriverName
	I0729 17:24:10.866804   36041 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 17:24:10.866830   36041 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHHostname
	I0729 17:24:10.869663   36041 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:24:10.870118   36041 main.go:141] libmachine: (ha-900414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ef:4e", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:18:28 +0000 UTC Type:0 Mac:52:54:00:df:ef:4e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-900414-m03 Clientid:01:52:54:00:df:ef:4e}
	I0729 17:24:10.870143   36041 main.go:141] libmachine: (ha-900414-m03) DBG | domain ha-900414-m03 has defined IP address 192.168.39.6 and MAC address 52:54:00:df:ef:4e in network mk-ha-900414
	I0729 17:24:10.870572   36041 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHPort
	I0729 17:24:10.870723   36041 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHKeyPath
	I0729 17:24:10.870891   36041 main.go:141] libmachine: (ha-900414-m03) Calling .GetSSHUsername
	I0729 17:24:10.871045   36041 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m03/id_rsa Username:docker}
	I0729 17:24:10.957442   36041 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 17:24:11.013724   36041 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 17:24:11.068359   36041 main.go:141] libmachine: Stopping "ha-900414-m03"...
	I0729 17:24:11.068382   36041 main.go:141] libmachine: (ha-900414-m03) Calling .GetState
	I0729 17:24:11.070044   36041 main.go:141] libmachine: (ha-900414-m03) Calling .Stop
	I0729 17:24:11.073638   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 0/120
	I0729 17:24:12.075168   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 1/120
	I0729 17:24:13.076493   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 2/120
	I0729 17:24:14.078061   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 3/120
	I0729 17:24:15.080677   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 4/120
	I0729 17:24:16.082687   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 5/120
	I0729 17:24:17.084157   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 6/120
	I0729 17:24:18.085498   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 7/120
	I0729 17:24:19.087017   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 8/120
	I0729 17:24:20.088532   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 9/120
	I0729 17:24:21.090743   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 10/120
	I0729 17:24:22.092927   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 11/120
	I0729 17:24:23.094470   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 12/120
	I0729 17:24:24.096128   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 13/120
	I0729 17:24:25.097594   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 14/120
	I0729 17:24:26.099513   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 15/120
	I0729 17:24:27.101262   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 16/120
	I0729 17:24:28.102766   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 17/120
	I0729 17:24:29.104218   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 18/120
	I0729 17:24:30.105627   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 19/120
	I0729 17:24:31.107545   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 20/120
	I0729 17:24:32.109035   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 21/120
	I0729 17:24:33.110803   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 22/120
	I0729 17:24:34.112377   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 23/120
	I0729 17:24:35.113996   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 24/120
	I0729 17:24:36.115444   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 25/120
	I0729 17:24:37.116796   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 26/120
	I0729 17:24:38.118385   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 27/120
	I0729 17:24:39.119865   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 28/120
	I0729 17:24:40.121054   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 29/120
	I0729 17:24:41.123002   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 30/120
	I0729 17:24:42.124663   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 31/120
	I0729 17:24:43.126197   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 32/120
	I0729 17:24:44.127666   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 33/120
	I0729 17:24:45.128858   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 34/120
	I0729 17:24:46.130708   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 35/120
	I0729 17:24:47.132196   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 36/120
	I0729 17:24:48.133813   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 37/120
	I0729 17:24:49.135326   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 38/120
	I0729 17:24:50.136526   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 39/120
	I0729 17:24:51.138101   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 40/120
	I0729 17:24:52.139376   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 41/120
	I0729 17:24:53.140566   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 42/120
	I0729 17:24:54.141912   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 43/120
	I0729 17:24:55.143126   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 44/120
	I0729 17:24:56.144682   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 45/120
	I0729 17:24:57.146380   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 46/120
	I0729 17:24:58.148099   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 47/120
	I0729 17:24:59.149434   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 48/120
	I0729 17:25:00.150756   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 49/120
	I0729 17:25:01.152401   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 50/120
	I0729 17:25:02.153741   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 51/120
	I0729 17:25:03.155163   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 52/120
	I0729 17:25:04.156543   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 53/120
	I0729 17:25:05.157933   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 54/120
	I0729 17:25:06.159400   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 55/120
	I0729 17:25:07.160557   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 56/120
	I0729 17:25:08.162056   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 57/120
	I0729 17:25:09.163220   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 58/120
	I0729 17:25:10.164582   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 59/120
	I0729 17:25:11.166281   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 60/120
	I0729 17:25:12.167429   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 61/120
	I0729 17:25:13.168932   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 62/120
	I0729 17:25:14.170153   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 63/120
	I0729 17:25:15.171912   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 64/120
	I0729 17:25:16.174157   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 65/120
	I0729 17:25:17.175345   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 66/120
	I0729 17:25:18.176636   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 67/120
	I0729 17:25:19.177867   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 68/120
	I0729 17:25:20.179205   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 69/120
	I0729 17:25:21.181032   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 70/120
	I0729 17:25:22.182891   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 71/120
	I0729 17:25:23.184309   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 72/120
	I0729 17:25:24.186163   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 73/120
	I0729 17:25:25.187407   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 74/120
	I0729 17:25:26.189282   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 75/120
	I0729 17:25:27.190628   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 76/120
	I0729 17:25:28.192808   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 77/120
	I0729 17:25:29.194339   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 78/120
	I0729 17:25:30.195721   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 79/120
	I0729 17:25:31.196939   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 80/120
	I0729 17:25:32.198335   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 81/120
	I0729 17:25:33.199518   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 82/120
	I0729 17:25:34.200718   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 83/120
	I0729 17:25:35.201952   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 84/120
	I0729 17:25:36.203373   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 85/120
	I0729 17:25:37.204807   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 86/120
	I0729 17:25:38.206082   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 87/120
	I0729 17:25:39.207231   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 88/120
	I0729 17:25:40.208652   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 89/120
	I0729 17:25:41.210935   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 90/120
	I0729 17:25:42.212081   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 91/120
	I0729 17:25:43.213333   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 92/120
	I0729 17:25:44.214587   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 93/120
	I0729 17:25:45.215934   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 94/120
	I0729 17:25:46.217592   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 95/120
	I0729 17:25:47.219504   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 96/120
	I0729 17:25:48.220859   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 97/120
	I0729 17:25:49.222041   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 98/120
	I0729 17:25:50.223216   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 99/120
	I0729 17:25:51.225201   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 100/120
	I0729 17:25:52.226547   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 101/120
	I0729 17:25:53.227749   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 102/120
	I0729 17:25:54.228949   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 103/120
	I0729 17:25:55.230587   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 104/120
	I0729 17:25:56.232172   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 105/120
	I0729 17:25:57.233581   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 106/120
	I0729 17:25:58.234981   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 107/120
	I0729 17:25:59.236383   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 108/120
	I0729 17:26:00.237770   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 109/120
	I0729 17:26:01.239422   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 110/120
	I0729 17:26:02.240939   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 111/120
	I0729 17:26:03.242431   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 112/120
	I0729 17:26:04.243969   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 113/120
	I0729 17:26:05.245861   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 114/120
	I0729 17:26:06.247686   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 115/120
	I0729 17:26:07.249439   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 116/120
	I0729 17:26:08.250789   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 117/120
	I0729 17:26:09.252834   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 118/120
	I0729 17:26:10.254250   36041 main.go:141] libmachine: (ha-900414-m03) Waiting for machine to stop 119/120
	I0729 17:26:11.255036   36041 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 17:26:11.255103   36041 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 17:26:11.257166   36041 out.go:177] 
	W0729 17:26:11.258394   36041 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 17:26:11.258404   36041 out.go:239] * 
	* 
	W0729 17:26:11.261279   36041 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:26:11.262655   36041 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-900414 -v=7 --alsologtostderr" : exit status 82
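The stop failure above is a timeout rather than an immediate error: minikube backs up /etc/cni and /etc/kubernetes into /var/lib/minikube/backup, issues a stop to the KVM domain, and then polls the machine state once per second for 120 attempts before giving up with GUEST_STOP_TIMEOUT (exit status 82). Below is a minimal Go sketch of that poll-with-timeout pattern, for orientation only; the VM interface and StopWithTimeout name are illustrative assumptions, not minikube's actual libmachine API.

// Package vmstop sketches the stop-and-poll pattern visible in the stderr log above.
package vmstop

import (
	"fmt"
	"time"
)

// VM is a hypothetical stand-in for a libvirt-backed machine driver; the real
// minikube code goes through libmachine, which is not reproduced here.
type VM interface {
	Stop() error            // request a shutdown of the guest
	State() (string, error) // report the current domain state, e.g. "Running"
}

// StopWithTimeout asks the VM to stop, then polls its state once per second
// for up to maxRetries attempts, mirroring the "Waiting for machine to stop
// i/120" lines above. If the machine still reports "Running" after the last
// poll, it returns the error that surfaces as GUEST_STOP_TIMEOUT (exit 82).
func StopWithTimeout(vm VM, maxRetries int) error {
	if err := vm.Stop(); err != nil {
		return fmt.Errorf("stop request failed: %w", err)
	}
	for i := 0; i < maxRetries; i++ {
		state, err := vm.State()
		if err != nil {
			return fmt.Errorf("querying state: %w", err)
		}
		if state != "Running" {
			return nil // machine reached a stopped state in time
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxRetries)
		time.Sleep(time.Second)
	}
	return fmt.Errorf("unable to stop vm, current state %q", "Running")
}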
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-900414 --wait=true -v=7 --alsologtostderr
E0729 17:26:52.902961   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
E0729 17:28:29.677072   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-900414 --wait=true -v=7 --alsologtostderr: (3m52.998011983s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-900414
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-900414 -n ha-900414
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-900414 logs -n 25: (1.831955322s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-900414 cp ha-900414-m03:/home/docker/cp-test.txt                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m02:/home/docker/cp-test_ha-900414-m03_ha-900414-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n ha-900414-m02 sudo cat                                          | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | /home/docker/cp-test_ha-900414-m03_ha-900414-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-900414 cp ha-900414-m03:/home/docker/cp-test.txt                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04:/home/docker/cp-test_ha-900414-m03_ha-900414-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n ha-900414-m04 sudo cat                                          | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | /home/docker/cp-test_ha-900414-m03_ha-900414-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-900414 cp testdata/cp-test.txt                                                | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-900414 cp ha-900414-m04:/home/docker/cp-test.txt                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3654370545/001/cp-test_ha-900414-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-900414 cp ha-900414-m04:/home/docker/cp-test.txt                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414:/home/docker/cp-test_ha-900414-m04_ha-900414.txt                       |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n ha-900414 sudo cat                                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | /home/docker/cp-test_ha-900414-m04_ha-900414.txt                                 |           |         |         |                     |                     |
	| cp      | ha-900414 cp ha-900414-m04:/home/docker/cp-test.txt                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m02:/home/docker/cp-test_ha-900414-m04_ha-900414-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n ha-900414-m02 sudo cat                                          | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | /home/docker/cp-test_ha-900414-m04_ha-900414-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-900414 cp ha-900414-m04:/home/docker/cp-test.txt                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m03:/home/docker/cp-test_ha-900414-m04_ha-900414-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n ha-900414-m03 sudo cat                                          | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | /home/docker/cp-test_ha-900414-m04_ha-900414-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-900414 node stop m02 -v=7                                                     | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-900414 node start m02 -v=7                                                    | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:23 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-900414 -v=7                                                           | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-900414 -v=7                                                                | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-900414 --wait=true -v=7                                                    | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:26 UTC | 29 Jul 24 17:30 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-900414                                                                | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:30 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 17:26:11
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 17:26:11.306314   36531 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:26:11.306438   36531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:26:11.306448   36531 out.go:304] Setting ErrFile to fd 2...
	I0729 17:26:11.306455   36531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:26:11.306623   36531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 17:26:11.307186   36531 out.go:298] Setting JSON to false
	I0729 17:26:11.308015   36531 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4123,"bootTime":1722269848,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 17:26:11.308078   36531 start.go:139] virtualization: kvm guest
	I0729 17:26:11.310432   36531 out.go:177] * [ha-900414] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 17:26:11.311933   36531 notify.go:220] Checking for updates...
	I0729 17:26:11.311948   36531 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 17:26:11.313147   36531 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:26:11.314334   36531 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 17:26:11.315592   36531 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 17:26:11.316841   36531 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 17:26:11.318012   36531 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:26:11.319746   36531 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:26:11.319837   36531 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:26:11.320276   36531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:26:11.320310   36531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:26:11.335861   36531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42545
	I0729 17:26:11.336257   36531 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:26:11.336740   36531 main.go:141] libmachine: Using API Version  1
	I0729 17:26:11.336763   36531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:26:11.337062   36531 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:26:11.337265   36531 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:26:11.371036   36531 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 17:26:11.372289   36531 start.go:297] selected driver: kvm2
	I0729 17:26:11.372300   36531 start.go:901] validating driver "kvm2" against &{Name:ha-900414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-900414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.156 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:26:11.372435   36531 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:26:11.372801   36531 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:26:11.372870   36531 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19345-11206/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 17:26:11.386447   36531 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 17:26:11.387089   36531 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:26:11.387148   36531 cni.go:84] Creating CNI manager for ""
	I0729 17:26:11.387162   36531 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 17:26:11.387211   36531 start.go:340] cluster config:
	{Name:ha-900414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-900414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.156 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:26:11.387312   36531 iso.go:125] acquiring lock: {Name:mke302f851ce8256f9b44dd080ed38df68285cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:26:11.388877   36531 out.go:177] * Starting "ha-900414" primary control-plane node in "ha-900414" cluster
	I0729 17:26:11.390002   36531 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:26:11.390032   36531 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 17:26:11.390039   36531 cache.go:56] Caching tarball of preloaded images
	I0729 17:26:11.390122   36531 preload.go:172] Found /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 17:26:11.390134   36531 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 17:26:11.390244   36531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/config.json ...
	I0729 17:26:11.390492   36531 start.go:360] acquireMachinesLock for ha-900414: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:26:11.390536   36531 start.go:364] duration metric: took 24.749µs to acquireMachinesLock for "ha-900414"
	I0729 17:26:11.390549   36531 start.go:96] Skipping create...Using existing machine configuration
	I0729 17:26:11.390556   36531 fix.go:54] fixHost starting: 
	I0729 17:26:11.390806   36531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:26:11.390834   36531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:26:11.404583   36531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44933
	I0729 17:26:11.404984   36531 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:26:11.405480   36531 main.go:141] libmachine: Using API Version  1
	I0729 17:26:11.405500   36531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:26:11.405803   36531 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:26:11.405962   36531 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:26:11.406118   36531 main.go:141] libmachine: (ha-900414) Calling .GetState
	I0729 17:26:11.407521   36531 fix.go:112] recreateIfNeeded on ha-900414: state=Running err=<nil>
	W0729 17:26:11.407561   36531 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 17:26:11.409223   36531 out.go:177] * Updating the running kvm2 "ha-900414" VM ...
	I0729 17:26:11.410314   36531 machine.go:94] provisionDockerMachine start ...
	I0729 17:26:11.410330   36531 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:26:11.410540   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:26:11.413424   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:11.413930   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:26:11.413964   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:11.414168   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:26:11.414325   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:26:11.414479   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:26:11.414638   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:26:11.414787   36531 main.go:141] libmachine: Using SSH client type: native
	I0729 17:26:11.415015   36531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0729 17:26:11.415031   36531 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 17:26:11.523581   36531 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-900414
	
	I0729 17:26:11.523609   36531 main.go:141] libmachine: (ha-900414) Calling .GetMachineName
	I0729 17:26:11.523807   36531 buildroot.go:166] provisioning hostname "ha-900414"
	I0729 17:26:11.523827   36531 main.go:141] libmachine: (ha-900414) Calling .GetMachineName
	I0729 17:26:11.524056   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:26:11.526693   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:11.527104   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:26:11.527130   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:11.527259   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:26:11.527493   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:26:11.527635   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:26:11.527804   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:26:11.527993   36531 main.go:141] libmachine: Using SSH client type: native
	I0729 17:26:11.528179   36531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0729 17:26:11.528199   36531 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-900414 && echo "ha-900414" | sudo tee /etc/hostname
	I0729 17:26:11.652853   36531 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-900414
	
	I0729 17:26:11.652889   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:26:11.655382   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:11.655722   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:26:11.655762   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:11.655943   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:26:11.656144   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:26:11.656282   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:26:11.656429   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:26:11.656605   36531 main.go:141] libmachine: Using SSH client type: native
	I0729 17:26:11.656767   36531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0729 17:26:11.656783   36531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-900414' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-900414/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-900414' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 17:26:11.763650   36531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:26:11.763673   36531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 17:26:11.763696   36531 buildroot.go:174] setting up certificates
	I0729 17:26:11.763707   36531 provision.go:84] configureAuth start
	I0729 17:26:11.763719   36531 main.go:141] libmachine: (ha-900414) Calling .GetMachineName
	I0729 17:26:11.763949   36531 main.go:141] libmachine: (ha-900414) Calling .GetIP
	I0729 17:26:11.766441   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:11.766847   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:26:11.766875   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:11.767013   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:26:11.769103   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:11.769432   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:26:11.769457   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:11.769548   36531 provision.go:143] copyHostCerts
	I0729 17:26:11.769580   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 17:26:11.769620   36531 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 17:26:11.769630   36531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 17:26:11.769700   36531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 17:26:11.769797   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 17:26:11.769818   36531 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 17:26:11.769828   36531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 17:26:11.769858   36531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 17:26:11.769927   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 17:26:11.769946   36531 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 17:26:11.769953   36531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 17:26:11.769978   36531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 17:26:11.770048   36531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.ha-900414 san=[127.0.0.1 192.168.39.114 ha-900414 localhost minikube]
	I0729 17:26:11.896044   36531 provision.go:177] copyRemoteCerts
	I0729 17:26:11.896116   36531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 17:26:11.896137   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:26:11.898609   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:11.899074   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:26:11.899103   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:11.899307   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:26:11.899509   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:26:11.899654   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:26:11.899774   36531 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:26:11.984842   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 17:26:11.984905   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 17:26:12.015181   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 17:26:12.015250   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0729 17:26:12.043085   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 17:26:12.043135   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 17:26:12.070247   36531 provision.go:87] duration metric: took 306.52942ms to configureAuth
	I0729 17:26:12.070271   36531 buildroot.go:189] setting minikube options for container-runtime
	I0729 17:26:12.070505   36531 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:26:12.070578   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:26:12.073032   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:12.073435   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:26:12.073463   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:12.073652   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:26:12.073817   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:26:12.073991   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:26:12.074122   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:26:12.074290   36531 main.go:141] libmachine: Using SSH client type: native
	I0729 17:26:12.074477   36531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0729 17:26:12.074492   36531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 17:27:42.966377   36531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 17:27:42.966404   36531 machine.go:97] duration metric: took 1m31.556076614s to provisionDockerMachine
	I0729 17:27:42.966439   36531 start.go:293] postStartSetup for "ha-900414" (driver="kvm2")
	I0729 17:27:42.966454   36531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 17:27:42.966481   36531 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:27:42.966744   36531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 17:27:42.966770   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:27:42.969405   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:42.969782   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:27:42.969806   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:42.969913   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:27:42.970111   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:27:42.970279   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:27:42.970429   36531 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:27:43.053530   36531 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 17:27:43.057880   36531 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 17:27:43.057902   36531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 17:27:43.057975   36531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 17:27:43.058048   36531 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 17:27:43.058058   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> /etc/ssl/certs/183932.pem
	I0729 17:27:43.058172   36531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 17:27:43.067925   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 17:27:43.090884   36531 start.go:296] duration metric: took 124.432235ms for postStartSetup
	I0729 17:27:43.090973   36531 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:27:43.091266   36531 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0729 17:27:43.091293   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:27:43.093776   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:43.094125   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:27:43.094158   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:43.094338   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:27:43.094510   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:27:43.094646   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:27:43.094822   36531 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	W0729 17:27:43.176156   36531 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0729 17:27:43.176177   36531 fix.go:56] duration metric: took 1m31.785620665s for fixHost
	I0729 17:27:43.176196   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:27:43.178680   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:43.178989   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:27:43.179008   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:43.179195   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:27:43.179348   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:27:43.179496   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:27:43.179607   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:27:43.179715   36531 main.go:141] libmachine: Using SSH client type: native
	I0729 17:27:43.179874   36531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0729 17:27:43.179891   36531 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 17:27:43.287111   36531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722274063.242453168
	
	I0729 17:27:43.287128   36531 fix.go:216] guest clock: 1722274063.242453168
	I0729 17:27:43.287141   36531 fix.go:229] Guest: 2024-07-29 17:27:43.242453168 +0000 UTC Remote: 2024-07-29 17:27:43.176184223 +0000 UTC m=+91.904326746 (delta=66.268945ms)
	I0729 17:27:43.287165   36531 fix.go:200] guest clock delta is within tolerance: 66.268945ms
	I0729 17:27:43.287171   36531 start.go:83] releasing machines lock for "ha-900414", held for 1m31.896626884s
	I0729 17:27:43.287189   36531 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:27:43.287432   36531 main.go:141] libmachine: (ha-900414) Calling .GetIP
	I0729 17:27:43.289659   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:43.290002   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:27:43.290026   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:43.290169   36531 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:27:43.290662   36531 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:27:43.290821   36531 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:27:43.290926   36531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 17:27:43.290970   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:27:43.290983   36531 ssh_runner.go:195] Run: cat /version.json
	I0729 17:27:43.291002   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:27:43.293399   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:43.293667   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:43.293744   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:27:43.293767   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:43.293903   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:27:43.294031   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:27:43.294048   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:27:43.294058   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:43.294214   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:27:43.294231   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:27:43.294425   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:27:43.294418   36531 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:27:43.294574   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:27:43.294791   36531 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:27:43.370956   36531 ssh_runner.go:195] Run: systemctl --version
	I0729 17:27:43.394450   36531 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 17:27:43.556346   36531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 17:27:43.562410   36531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 17:27:43.562474   36531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 17:27:43.571883   36531 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 17:27:43.571908   36531 start.go:495] detecting cgroup driver to use...
	I0729 17:27:43.571998   36531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 17:27:43.588205   36531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 17:27:43.601659   36531 docker.go:217] disabling cri-docker service (if available) ...
	I0729 17:27:43.601704   36531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 17:27:43.615321   36531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 17:27:43.628123   36531 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 17:27:43.783170   36531 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 17:27:43.934601   36531 docker.go:233] disabling docker service ...
	I0729 17:27:43.934662   36531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 17:27:43.952907   36531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 17:27:43.966008   36531 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 17:27:44.106465   36531 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 17:27:44.251272   36531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 17:27:44.267168   36531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 17:27:44.286018   36531 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 17:27:44.286072   36531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:27:44.296260   36531 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 17:27:44.296315   36531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:27:44.306232   36531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:27:44.315844   36531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:27:44.325970   36531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 17:27:44.336146   36531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:27:44.346682   36531 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:27:44.358201   36531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:27:44.368533   36531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 17:27:44.377631   36531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 17:27:44.386655   36531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:27:44.530166   36531 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 17:27:44.822545   36531 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 17:27:44.822609   36531 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 17:27:44.827763   36531 start.go:563] Will wait 60s for crictl version
	I0729 17:27:44.827813   36531 ssh_runner.go:195] Run: which crictl
	I0729 17:27:44.831550   36531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 17:27:44.870913   36531 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 17:27:44.870994   36531 ssh_runner.go:195] Run: crio --version
	I0729 17:27:44.899236   36531 ssh_runner.go:195] Run: crio --version
	I0729 17:27:44.929663   36531 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 17:27:44.930956   36531 main.go:141] libmachine: (ha-900414) Calling .GetIP
	I0729 17:27:44.933324   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:44.933719   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:27:44.933740   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:44.933940   36531 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 17:27:44.938459   36531 kubeadm.go:883] updating cluster {Name:ha-900414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-900414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.156 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 17:27:44.938581   36531 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:27:44.938623   36531 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 17:27:44.982323   36531 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 17:27:44.982343   36531 crio.go:433] Images already preloaded, skipping extraction
	I0729 17:27:44.982405   36531 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 17:27:45.017312   36531 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 17:27:45.017331   36531 cache_images.go:84] Images are preloaded, skipping loading
	I0729 17:27:45.017339   36531 kubeadm.go:934] updating node { 192.168.39.114 8443 v1.30.3 crio true true} ...
	I0729 17:27:45.017429   36531 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-900414 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-900414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 17:27:45.017492   36531 ssh_runner.go:195] Run: crio config
	I0729 17:27:45.075046   36531 cni.go:84] Creating CNI manager for ""
	I0729 17:27:45.075062   36531 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 17:27:45.075071   36531 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 17:27:45.075090   36531 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.114 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-900414 NodeName:ha-900414 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 17:27:45.075210   36531 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-900414"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 17:27:45.075230   36531 kube-vip.go:115] generating kube-vip config ...
	I0729 17:27:45.075264   36531 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 17:27:45.086751   36531 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 17:27:45.086841   36531 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0729 17:27:45.086897   36531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 17:27:45.096373   36531 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 17:27:45.096432   36531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 17:27:45.105323   36531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 17:27:45.121284   36531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 17:27:45.137139   36531 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 17:27:45.153053   36531 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 17:27:45.169479   36531 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 17:27:45.173779   36531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:27:45.335745   36531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:27:45.350140   36531 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414 for IP: 192.168.39.114
	I0729 17:27:45.350161   36531 certs.go:194] generating shared ca certs ...
	I0729 17:27:45.350174   36531 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:27:45.350299   36531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 17:27:45.350336   36531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 17:27:45.350345   36531 certs.go:256] generating profile certs ...
	I0729 17:27:45.350465   36531 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.key
	I0729 17:27:45.350500   36531 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.6e6af526
	I0729 17:27:45.350517   36531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.6e6af526 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114 192.168.39.111 192.168.39.6 192.168.39.254]
	I0729 17:27:45.444770   36531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.6e6af526 ...
	I0729 17:27:45.444798   36531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.6e6af526: {Name:mkace41337feac31813a88006368e7446bce771b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:27:45.444983   36531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.6e6af526 ...
	I0729 17:27:45.444996   36531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.6e6af526: {Name:mkd9b548f7c68524ec9eba7eee5f22bf0d008e46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:27:45.445092   36531 certs.go:381] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.6e6af526 -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt
	I0729 17:27:45.445252   36531 certs.go:385] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.6e6af526 -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key
	I0729 17:27:45.445393   36531 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key
	I0729 17:27:45.445408   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 17:27:45.445424   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 17:27:45.445440   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 17:27:45.445456   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 17:27:45.445472   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 17:27:45.445486   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 17:27:45.445504   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 17:27:45.445521   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 17:27:45.445580   36531 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 17:27:45.445627   36531 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 17:27:45.445637   36531 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 17:27:45.445671   36531 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 17:27:45.445710   36531 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 17:27:45.445739   36531 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 17:27:45.445780   36531 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 17:27:45.445809   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem -> /usr/share/ca-certificates/18393.pem
	I0729 17:27:45.445833   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> /usr/share/ca-certificates/183932.pem
	I0729 17:27:45.445848   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:27:45.446485   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 17:27:45.472397   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 17:27:45.495890   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 17:27:45.519653   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 17:27:45.542761   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 17:27:45.565458   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 17:27:45.588115   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 17:27:45.611695   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 17:27:45.634924   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 17:27:45.657552   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 17:27:45.680733   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 17:27:45.704067   36531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 17:27:45.720799   36531 ssh_runner.go:195] Run: openssl version
	I0729 17:27:45.726636   36531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 17:27:45.737314   36531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 17:27:45.741720   36531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 17:27:45.741761   36531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 17:27:45.747354   36531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 17:27:45.756749   36531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 17:27:45.767582   36531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 17:27:45.771948   36531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 17:27:45.771995   36531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 17:27:45.777893   36531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 17:27:45.788034   36531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 17:27:45.798900   36531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:27:45.803502   36531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:27:45.803549   36531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:27:45.809298   36531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
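	(Editor's note: the test/ln/hash steps above install each CA into the guest's system trust store: the certificate is placed under /usr/share/ca-certificates, its OpenSSL subject hash is computed, and a <hash>.0 symlink is created in /etc/ssl/certs so OpenSSL-based clients can resolve it. A minimal sketch of the same sequence for one CA, assuming a Debian-style /etc/ssl/certs layout; the path mirrors the minikubeCA.pem entry in the log, and the hash value is whatever openssl prints for that certificate:

	    # copy the CA where the trust store expects it
	    sudo cp ca.crt /usr/share/ca-certificates/minikubeCA.pem
	    # compute the subject hash OpenSSL uses for lookup (e.g. b5213941)
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    # link <hash>.0 into /etc/ssl/certs unless a link is already present
	    sudo test -L "/etc/ssl/certs/${hash}.0" || sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	)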
	I0729 17:27:45.818811   36531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 17:27:45.823291   36531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 17:27:45.828935   36531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 17:27:45.834533   36531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 17:27:45.839895   36531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 17:27:45.845210   36531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 17:27:45.850553   36531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
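	(Editor's note: the six openssl runs above use -checkend 86400 to verify that each control-plane certificate remains valid for at least the next 24 hours (86400 seconds); a non-zero exit status indicates imminent expiry. A sketch of how such a check could be scripted by hand; the regeneration step is an assumption, not shown in this log:

	    # exit 0 if the certificate is still valid 86400s (24h) from now, 1 otherwise
	    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	        echo "certificate valid for at least another 24h"
	    else
	        echo "certificate expires within 24h - regenerate before restarting the control plane"
	    fi
	)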
	I0729 17:27:45.855890   36531 kubeadm.go:392] StartCluster: {Name:ha-900414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-900414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.156 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:27:45.855999   36531 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 17:27:45.856053   36531 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 17:27:45.893498   36531 cri.go:89] found id: "6e723c54d812a5be7eb854af81bf345c7429c93d2037e5c4ff4c3a74a418fae1"
	I0729 17:27:45.893522   36531 cri.go:89] found id: "e461901feb4562fc8c70f08a274772296164ae97b7fe43bb6573f9fe43ed0503"
	I0729 17:27:45.893526   36531 cri.go:89] found id: "e4d3e21e2fdd017f698d1d9d2ba208122c495a9c6273542bd16759ffc40e16a1"
	I0729 17:27:45.893529   36531 cri.go:89] found id: "7d7ffaf9ef2fda3e8c5965888c0244dd20c8cdc30b4ed1c300c5f9de3a70a127"
	I0729 17:27:45.893532   36531 cri.go:89] found id: "911569fe2373d5193385d0fdcc98071bacd23c7de020ed4e2ab3a15a3793c2d2"
	I0729 17:27:45.893534   36531 cri.go:89] found id: "b419192dc8add024f08c798a5f50d7c6bd2ee0ae8a2280771508aebc78e20217"
	I0729 17:27:45.893537   36531 cri.go:89] found id: "10b182b72bc50740d9cc2e0ed8b5c1d4b8f58c58594cc462fc796a75ccce7d38"
	I0729 17:27:45.893539   36531 cri.go:89] found id: "37ef29620e9c9670549fa7741de5956157c7a03728d417b46b44a7b1abbf2ce9"
	I0729 17:27:45.893542   36531 cri.go:89] found id: "426b48b0fdbff14ce36fc0396074186cbd51533c984e6fac5f3f963bce611059"
	I0729 17:27:45.893546   36531 cri.go:89] found id: "a7721018288f905547c9c059b6453a96e4c74f3573058e88425444162b255edf"
	I0729 17:27:45.893551   36531 cri.go:89] found id: "2a27f5a54bd43275313e419dabaa643ad1764f5cd10953333df1eea8a9a4bf1b"
	I0729 17:27:45.893554   36531 cri.go:89] found id: "270db6978c4e4bce98a1f424ce50f66507840c818ab639d9ef02e8f96bab41d6"
	I0729 17:27:45.893556   36531 cri.go:89] found id: "dd71b5556931becb81321807072e3a8100ce3344e4dea3237c6918a6c8e98cc5"
	I0729 17:27:45.893558   36531 cri.go:89] found id: ""
	I0729 17:27:45.893614   36531 ssh_runner.go:195] Run: sudo runc list -f json
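	(Editor's note: before restarting the cluster, minikube enumerates the existing kube-system containers through the CRI so it knows what is already running. The equivalent commands, which could be run by hand on the node to reproduce the listing above, mirror the Run lines in the log:

	    # list all containers (including exited ones) in the kube-system namespace, IDs only
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	    # the low-level runc view of the same containers, as JSON
	    sudo runc list -f json
	)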
	
	
	==> CRI-O <==
	Jul 29 17:30:04 ha-900414 crio[3755]: time="2024-07-29 17:30:04.939800302Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722274204939777690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe428be9-ff6b-4aec-a089-1cde0d3b653a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:30:04 ha-900414 crio[3755]: time="2024-07-29 17:30:04.940324359Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05c7e1a4-4059-4671-99c8-645ff2872d77 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:30:04 ha-900414 crio[3755]: time="2024-07-29 17:30:04.940385449Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05c7e1a4-4059-4671-99c8-645ff2872d77 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:30:04 ha-900414 crio[3755]: time="2024-07-29 17:30:04.941030389Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c5a0872090b8a9407f0e240df38b013e8c3038b5e47a8999d2dcfe1a6a847b2,PodSandboxId:c007a83285af87cc3a8f7decb0251012dde13811d97b57d08206a31427d5c3a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722274110760146881,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50fa96e8-1ee5-4e09-a734-802dbcd02bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 1a126ce7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:824354c7c16e9e8af404495001dc5d595a7cbc026c918156cf68ed850d7c19e8,PodSandboxId:cef075745bd3e13fa4553a22e0e8749baa40e9fecf2b65e77c60ddf312c1ed5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722274109764609958,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188869688c2292cb440067d4b4cfa9f3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237f4f9a22aab5537900193f82504f315f0a18522b4ec147a18810a7207e9d03,PodSandboxId:0104a45e7598e30bdbc9acd393e17b4589234866ffc4e4c6f0deb5f2b179f696,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722274108761219032,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc461575e1c166c1aa8b00d38af205a,},Annotations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6e4cdaa36f1850f6ade39480251a5593c0369b0410bccb53957d0d832f43e7,PodSandboxId:d12022815679b303c38ea086e2a76f39ec3d47f7ae40b748cf15ab6bb1fe964e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722274105028099226,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4fv4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6,},Annotations:map[string]string{io.kubernetes.container.hash: bbf9a5b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97477efe48afec6ed188f1d10e74857abdbc4a945712c05e01ae55c1f48fb38d,PodSandboxId:58018117a81ff794019dc4556482350f7196d6ed56994875b706a1aa9e5d434d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722274086072630131,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e529ef1ae527634e8684df95d99942df,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67fcbfdd88d8ba74c5d1c8a633f19a95728f6d2649e20c14873f44a9a83cb5fb,PodSandboxId:c007a83285af87cc3a8f7decb0251012dde13811d97b57d08206a31427d5c3a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722274072296491519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50fa96e8-1ee5-4e09-a734-802dbcd02bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 1a126ce7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74b6fc801cbb9e29df38fb778b6af9db3ab8950818cd18c03383e749fc4190a,PodSandboxId:d8ce74c22f4e2c9339fc50e8b032b204d4c56cce3332942283790ff88fa71d3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722274071856460173,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tng4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2303269f-50d3-4a63-aa76-891f001e6f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e285077a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:478caa49a9e2e5f11ac1dfc8c6e870c29f3045b994c601e6cd646952b9c0de2f,PodSandboxId:e37d55989869ca3baab53df8ebf4b721d60ede534281c48e3ff8c5b14c8d46b5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722274071924188035,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z9cvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2177daa-4efb-478c-845f-f30e77e91684,},Annotations:map[string]string{io.kubernetes.container.hash: 7870c1dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1a269a3
6cb51b460d86ff16520ffadbbd810606203c8a998252609f5a40452b,PodSandboxId:a863620823cc108fa12a70ef138b83fca0cf1c2f8e92863ae0a3b769dbf738d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722274071976348453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-48j6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 14f903c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96142230a5d1570c505a5127e3d4b0025e6c120c808eec6b1579291d9de14bb9,PodSandboxId:cef075745bd3e13fa4553a22e0e8749baa40e9fecf2b65e77c60ddf312c1ed5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722274071742708802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188869688c2292cb440067d4b4cfa9f3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96d199d063086572e53c3056fe3c36873a53cd3fe0bc1d0c796366f2c85d8b47,PodSandboxId:48bcddc3f018c08cbdf46edb4557396d8b25dfe6016f192534afa0c5a51328a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722274071709164105,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c283b6b662036e086a0948631d339c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ec5252a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e76e9ddc1762b284dde16c55310d2e5005380c7b7522aa4c92d164afc32b292,PodSandboxId:528c72454cdd017233d33e8fd2f875f1ca0d26df629c50c50451e523df5851c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722274071702640533,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9r87x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc4709f-f07b-4694-a352-aedd9c67bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: a73c1fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b08de2e88b41b61b1403daba370b669abfd1acee1793945733079da7004a6e,PodSandboxId:f69bbf70d4ce08eb7b420abc1362fb6f668ff031bb536ef59f5248de912ee3fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722274071580225499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba63e9544003a3
2c61ae2cfa7bb117,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d511eeca56c8ff8ccbb88e762a12ef7258f1c2175101320dda0553e82887c297,PodSandboxId:0104a45e7598e30bdbc9acd393e17b4589234866ffc4e4c6f0deb5f2b179f696,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722274071565576136,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc461575e1c166c1aa8b00d38af205a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:174e5d31268c70a798e1fa1fe5d2845d98eaed228a11b55810b7ca4680256a8e,PodSandboxId:7d2a64a5bcccdbfe3d1db48fd0a6231c01ec2f72f5944f5aa82835bdbbf8641b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722273570293260741,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4fv4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6,},Annot
ations:map[string]string{io.kubernetes.container.hash: bbf9a5b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7ffaf9ef2fda3e8c5965888c0244dd20c8cdc30b4ed1c300c5f9de3a70a127,PodSandboxId:7a0bb58ad2b90a00cbfe5381a420068caf367d6d0a46d8bfa235680d9a9e383c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722273433299033423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9r87x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc4709f-f07b-4694-a352-aedd9c67bbb2,},Annotations:map[string]string{io.kube
rnetes.container.hash: a73c1fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911569fe2373d5193385d0fdcc98071bacd23c7de020ed4e2ab3a15a3793c2d2,PodSandboxId:f47facc78da61a96cbc7f88d068ff1130bdf82703fa98c5e773eba93b8000852,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722273433248658511,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-48j6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 14f903c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b182b72bc50740d9cc2e0ed8b5c1d4b8f58c58594cc462fc796a75ccce7d38,PodSandboxId:30715fa1b9f024468de573f3e60b03860bdea65df505677b107723e5e7663d18,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722273421363258619,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z9cvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2177daa-4efb-478c-845f-f30e77e91684,},Annotations:map[string]string{io.kubernetes.container.hash: 7870c1dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ef29620e9c9670549fa7741de5956157c7a03728d417b46b44a7b1abbf2ce9,PodSandboxId:250f31f0996e1b89f155a50b796cf5c3e03e4e621f62973dc2ca1b4547440256,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722273417715030224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tng4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2303269f-50d3-4a63-aa76-891f001e6f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e285077a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7721018288f905547c9c059b6453a96e4c74f3573058e88425444162b255edf,PodSandboxId:e2a054b42822ad7d37df60a69fdb759eb309b8ee40e4c712e2f7ae6a2aaa0e6c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722273397639748438,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba63e9544003a32c61ae2cfa7bb117,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a27f5a54bd43275313e419dabaa643ad1764f5cd10953333df1eea8a9a4bf1b,PodSandboxId:46030b1ba43cfae01b3b4a26ba23e19c1dade394973241fddfe9126def4aa597,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722273397620067180,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c283b6b662036e086a0948631d339c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ec5252a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=05c7e1a4-4059-4671-99c8-645ff2872d77 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:30:04 ha-900414 crio[3755]: time="2024-07-29 17:30:04.992739038Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=69c7c1cc-11fc-4afd-b6f8-11ee8705435d name=/runtime.v1.RuntimeService/Version
	Jul 29 17:30:04 ha-900414 crio[3755]: time="2024-07-29 17:30:04.992852332Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=69c7c1cc-11fc-4afd-b6f8-11ee8705435d name=/runtime.v1.RuntimeService/Version
	Jul 29 17:30:04 ha-900414 crio[3755]: time="2024-07-29 17:30:04.994057429Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2bf17bc6-1f77-46c7-84f3-205e07363898 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:30:04 ha-900414 crio[3755]: time="2024-07-29 17:30:04.994570489Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722274204994544197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2bf17bc6-1f77-46c7-84f3-205e07363898 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:30:04 ha-900414 crio[3755]: time="2024-07-29 17:30:04.995035300Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b8958ca-2dc7-404a-997b-6da05d53e4bf name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:30:04 ha-900414 crio[3755]: time="2024-07-29 17:30:04.995111238Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b8958ca-2dc7-404a-997b-6da05d53e4bf name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:30:04 ha-900414 crio[3755]: time="2024-07-29 17:30:04.996444048Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c5a0872090b8a9407f0e240df38b013e8c3038b5e47a8999d2dcfe1a6a847b2,PodSandboxId:c007a83285af87cc3a8f7decb0251012dde13811d97b57d08206a31427d5c3a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722274110760146881,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50fa96e8-1ee5-4e09-a734-802dbcd02bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 1a126ce7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:824354c7c16e9e8af404495001dc5d595a7cbc026c918156cf68ed850d7c19e8,PodSandboxId:cef075745bd3e13fa4553a22e0e8749baa40e9fecf2b65e77c60ddf312c1ed5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722274109764609958,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188869688c2292cb440067d4b4cfa9f3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237f4f9a22aab5537900193f82504f315f0a18522b4ec147a18810a7207e9d03,PodSandboxId:0104a45e7598e30bdbc9acd393e17b4589234866ffc4e4c6f0deb5f2b179f696,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722274108761219032,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc461575e1c166c1aa8b00d38af205a,},Annotations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6e4cdaa36f1850f6ade39480251a5593c0369b0410bccb53957d0d832f43e7,PodSandboxId:d12022815679b303c38ea086e2a76f39ec3d47f7ae40b748cf15ab6bb1fe964e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722274105028099226,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4fv4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6,},Annotations:map[string]string{io.kubernetes.container.hash: bbf9a5b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97477efe48afec6ed188f1d10e74857abdbc4a945712c05e01ae55c1f48fb38d,PodSandboxId:58018117a81ff794019dc4556482350f7196d6ed56994875b706a1aa9e5d434d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722274086072630131,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e529ef1ae527634e8684df95d99942df,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67fcbfdd88d8ba74c5d1c8a633f19a95728f6d2649e20c14873f44a9a83cb5fb,PodSandboxId:c007a83285af87cc3a8f7decb0251012dde13811d97b57d08206a31427d5c3a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722274072296491519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50fa96e8-1ee5-4e09-a734-802dbcd02bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 1a126ce7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74b6fc801cbb9e29df38fb778b6af9db3ab8950818cd18c03383e749fc4190a,PodSandboxId:d8ce74c22f4e2c9339fc50e8b032b204d4c56cce3332942283790ff88fa71d3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722274071856460173,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tng4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2303269f-50d3-4a63-aa76-891f001e6f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e285077a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:478caa49a9e2e5f11ac1dfc8c6e870c29f3045b994c601e6cd646952b9c0de2f,PodSandboxId:e37d55989869ca3baab53df8ebf4b721d60ede534281c48e3ff8c5b14c8d46b5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722274071924188035,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z9cvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2177daa-4efb-478c-845f-f30e77e91684,},Annotations:map[string]string{io.kubernetes.container.hash: 7870c1dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1a269a3
6cb51b460d86ff16520ffadbbd810606203c8a998252609f5a40452b,PodSandboxId:a863620823cc108fa12a70ef138b83fca0cf1c2f8e92863ae0a3b769dbf738d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722274071976348453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-48j6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 14f903c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96142230a5d1570c505a5127e3d4b0025e6c120c808eec6b1579291d9de14bb9,PodSandboxId:cef075745bd3e13fa4553a22e0e8749baa40e9fecf2b65e77c60ddf312c1ed5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722274071742708802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188869688c2292cb440067d4b4cfa9f3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96d199d063086572e53c3056fe3c36873a53cd3fe0bc1d0c796366f2c85d8b47,PodSandboxId:48bcddc3f018c08cbdf46edb4557396d8b25dfe6016f192534afa0c5a51328a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722274071709164105,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c283b6b662036e086a0948631d339c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ec5252a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e76e9ddc1762b284dde16c55310d2e5005380c7b7522aa4c92d164afc32b292,PodSandboxId:528c72454cdd017233d33e8fd2f875f1ca0d26df629c50c50451e523df5851c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722274071702640533,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9r87x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc4709f-f07b-4694-a352-aedd9c67bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: a73c1fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b08de2e88b41b61b1403daba370b669abfd1acee1793945733079da7004a6e,PodSandboxId:f69bbf70d4ce08eb7b420abc1362fb6f668ff031bb536ef59f5248de912ee3fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722274071580225499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba63e9544003a3
2c61ae2cfa7bb117,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d511eeca56c8ff8ccbb88e762a12ef7258f1c2175101320dda0553e82887c297,PodSandboxId:0104a45e7598e30bdbc9acd393e17b4589234866ffc4e4c6f0deb5f2b179f696,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722274071565576136,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc461575e1c166c1aa8b00d38af205a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:174e5d31268c70a798e1fa1fe5d2845d98eaed228a11b55810b7ca4680256a8e,PodSandboxId:7d2a64a5bcccdbfe3d1db48fd0a6231c01ec2f72f5944f5aa82835bdbbf8641b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722273570293260741,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4fv4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6,},Annot
ations:map[string]string{io.kubernetes.container.hash: bbf9a5b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7ffaf9ef2fda3e8c5965888c0244dd20c8cdc30b4ed1c300c5f9de3a70a127,PodSandboxId:7a0bb58ad2b90a00cbfe5381a420068caf367d6d0a46d8bfa235680d9a9e383c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722273433299033423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9r87x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc4709f-f07b-4694-a352-aedd9c67bbb2,},Annotations:map[string]string{io.kube
rnetes.container.hash: a73c1fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911569fe2373d5193385d0fdcc98071bacd23c7de020ed4e2ab3a15a3793c2d2,PodSandboxId:f47facc78da61a96cbc7f88d068ff1130bdf82703fa98c5e773eba93b8000852,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722273433248658511,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-48j6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 14f903c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b182b72bc50740d9cc2e0ed8b5c1d4b8f58c58594cc462fc796a75ccce7d38,PodSandboxId:30715fa1b9f024468de573f3e60b03860bdea65df505677b107723e5e7663d18,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722273421363258619,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z9cvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2177daa-4efb-478c-845f-f30e77e91684,},Annotations:map[string]string{io.kubernetes.container.hash: 7870c1dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ef29620e9c9670549fa7741de5956157c7a03728d417b46b44a7b1abbf2ce9,PodSandboxId:250f31f0996e1b89f155a50b796cf5c3e03e4e621f62973dc2ca1b4547440256,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722273417715030224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tng4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2303269f-50d3-4a63-aa76-891f001e6f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e285077a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7721018288f905547c9c059b6453a96e4c74f3573058e88425444162b255edf,PodSandboxId:e2a054b42822ad7d37df60a69fdb759eb309b8ee40e4c712e2f7ae6a2aaa0e6c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722273397639748438,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba63e9544003a32c61ae2cfa7bb117,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a27f5a54bd43275313e419dabaa643ad1764f5cd10953333df1eea8a9a4bf1b,PodSandboxId:46030b1ba43cfae01b3b4a26ba23e19c1dade394973241fddfe9126def4aa597,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722273397620067180,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c283b6b662036e086a0948631d339c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ec5252a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b8958ca-2dc7-404a-997b-6da05d53e4bf name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:30:05 ha-900414 crio[3755]: time="2024-07-29 17:30:05.041389906Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=064d67cc-ad85-4a40-9a07-5bd01efffce3 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:30:05 ha-900414 crio[3755]: time="2024-07-29 17:30:05.041466336Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=064d67cc-ad85-4a40-9a07-5bd01efffce3 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:30:05 ha-900414 crio[3755]: time="2024-07-29 17:30:05.043960289Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ee5fc8ec-6170-474b-8e65-02cbfc4cad11 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:30:05 ha-900414 crio[3755]: time="2024-07-29 17:30:05.044406344Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722274205044382364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee5fc8ec-6170-474b-8e65-02cbfc4cad11 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:30:05 ha-900414 crio[3755]: time="2024-07-29 17:30:05.044879690Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2c4eaed6-468d-4918-ba0d-b8efff7dc96f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:30:05 ha-900414 crio[3755]: time="2024-07-29 17:30:05.044976312Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2c4eaed6-468d-4918-ba0d-b8efff7dc96f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:30:05 ha-900414 crio[3755]: time="2024-07-29 17:30:05.045459532Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c5a0872090b8a9407f0e240df38b013e8c3038b5e47a8999d2dcfe1a6a847b2,PodSandboxId:c007a83285af87cc3a8f7decb0251012dde13811d97b57d08206a31427d5c3a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722274110760146881,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50fa96e8-1ee5-4e09-a734-802dbcd02bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 1a126ce7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:824354c7c16e9e8af404495001dc5d595a7cbc026c918156cf68ed850d7c19e8,PodSandboxId:cef075745bd3e13fa4553a22e0e8749baa40e9fecf2b65e77c60ddf312c1ed5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722274109764609958,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188869688c2292cb440067d4b4cfa9f3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237f4f9a22aab5537900193f82504f315f0a18522b4ec147a18810a7207e9d03,PodSandboxId:0104a45e7598e30bdbc9acd393e17b4589234866ffc4e4c6f0deb5f2b179f696,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722274108761219032,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc461575e1c166c1aa8b00d38af205a,},Annotations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6e4cdaa36f1850f6ade39480251a5593c0369b0410bccb53957d0d832f43e7,PodSandboxId:d12022815679b303c38ea086e2a76f39ec3d47f7ae40b748cf15ab6bb1fe964e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722274105028099226,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4fv4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6,},Annotations:map[string]string{io.kubernetes.container.hash: bbf9a5b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97477efe48afec6ed188f1d10e74857abdbc4a945712c05e01ae55c1f48fb38d,PodSandboxId:58018117a81ff794019dc4556482350f7196d6ed56994875b706a1aa9e5d434d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722274086072630131,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e529ef1ae527634e8684df95d99942df,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67fcbfdd88d8ba74c5d1c8a633f19a95728f6d2649e20c14873f44a9a83cb5fb,PodSandboxId:c007a83285af87cc3a8f7decb0251012dde13811d97b57d08206a31427d5c3a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722274072296491519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50fa96e8-1ee5-4e09-a734-802dbcd02bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 1a126ce7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74b6fc801cbb9e29df38fb778b6af9db3ab8950818cd18c03383e749fc4190a,PodSandboxId:d8ce74c22f4e2c9339fc50e8b032b204d4c56cce3332942283790ff88fa71d3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722274071856460173,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tng4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2303269f-50d3-4a63-aa76-891f001e6f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e285077a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:478caa49a9e2e5f11ac1dfc8c6e870c29f3045b994c601e6cd646952b9c0de2f,PodSandboxId:e37d55989869ca3baab53df8ebf4b721d60ede534281c48e3ff8c5b14c8d46b5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722274071924188035,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z9cvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2177daa-4efb-478c-845f-f30e77e91684,},Annotations:map[string]string{io.kubernetes.container.hash: 7870c1dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1a269a3
6cb51b460d86ff16520ffadbbd810606203c8a998252609f5a40452b,PodSandboxId:a863620823cc108fa12a70ef138b83fca0cf1c2f8e92863ae0a3b769dbf738d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722274071976348453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-48j6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 14f903c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96142230a5d1570c505a5127e3d4b0025e6c120c808eec6b1579291d9de14bb9,PodSandboxId:cef075745bd3e13fa4553a22e0e8749baa40e9fecf2b65e77c60ddf312c1ed5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722274071742708802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188869688c2292cb440067d4b4cfa9f3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96d199d063086572e53c3056fe3c36873a53cd3fe0bc1d0c796366f2c85d8b47,PodSandboxId:48bcddc3f018c08cbdf46edb4557396d8b25dfe6016f192534afa0c5a51328a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722274071709164105,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c283b6b662036e086a0948631d339c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ec5252a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e76e9ddc1762b284dde16c55310d2e5005380c7b7522aa4c92d164afc32b292,PodSandboxId:528c72454cdd017233d33e8fd2f875f1ca0d26df629c50c50451e523df5851c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722274071702640533,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9r87x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc4709f-f07b-4694-a352-aedd9c67bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: a73c1fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b08de2e88b41b61b1403daba370b669abfd1acee1793945733079da7004a6e,PodSandboxId:f69bbf70d4ce08eb7b420abc1362fb6f668ff031bb536ef59f5248de912ee3fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722274071580225499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba63e9544003a3
2c61ae2cfa7bb117,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d511eeca56c8ff8ccbb88e762a12ef7258f1c2175101320dda0553e82887c297,PodSandboxId:0104a45e7598e30bdbc9acd393e17b4589234866ffc4e4c6f0deb5f2b179f696,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722274071565576136,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc461575e1c166c1aa8b00d38af205a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:174e5d31268c70a798e1fa1fe5d2845d98eaed228a11b55810b7ca4680256a8e,PodSandboxId:7d2a64a5bcccdbfe3d1db48fd0a6231c01ec2f72f5944f5aa82835bdbbf8641b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722273570293260741,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4fv4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6,},Annot
ations:map[string]string{io.kubernetes.container.hash: bbf9a5b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7ffaf9ef2fda3e8c5965888c0244dd20c8cdc30b4ed1c300c5f9de3a70a127,PodSandboxId:7a0bb58ad2b90a00cbfe5381a420068caf367d6d0a46d8bfa235680d9a9e383c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722273433299033423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9r87x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc4709f-f07b-4694-a352-aedd9c67bbb2,},Annotations:map[string]string{io.kube
rnetes.container.hash: a73c1fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911569fe2373d5193385d0fdcc98071bacd23c7de020ed4e2ab3a15a3793c2d2,PodSandboxId:f47facc78da61a96cbc7f88d068ff1130bdf82703fa98c5e773eba93b8000852,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722273433248658511,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-48j6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 14f903c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b182b72bc50740d9cc2e0ed8b5c1d4b8f58c58594cc462fc796a75ccce7d38,PodSandboxId:30715fa1b9f024468de573f3e60b03860bdea65df505677b107723e5e7663d18,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722273421363258619,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z9cvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2177daa-4efb-478c-845f-f30e77e91684,},Annotations:map[string]string{io.kubernetes.container.hash: 7870c1dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ef29620e9c9670549fa7741de5956157c7a03728d417b46b44a7b1abbf2ce9,PodSandboxId:250f31f0996e1b89f155a50b796cf5c3e03e4e621f62973dc2ca1b4547440256,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722273417715030224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tng4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2303269f-50d3-4a63-aa76-891f001e6f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e285077a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7721018288f905547c9c059b6453a96e4c74f3573058e88425444162b255edf,PodSandboxId:e2a054b42822ad7d37df60a69fdb759eb309b8ee40e4c712e2f7ae6a2aaa0e6c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722273397639748438,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba63e9544003a32c61ae2cfa7bb117,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a27f5a54bd43275313e419dabaa643ad1764f5cd10953333df1eea8a9a4bf1b,PodSandboxId:46030b1ba43cfae01b3b4a26ba23e19c1dade394973241fddfe9126def4aa597,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722273397620067180,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c283b6b662036e086a0948631d339c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ec5252a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2c4eaed6-468d-4918-ba0d-b8efff7dc96f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:30:05 ha-900414 crio[3755]: time="2024-07-29 17:30:05.099026651Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b78ebc92-f657-43ff-9dd2-2aa6fd8cc051 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:30:05 ha-900414 crio[3755]: time="2024-07-29 17:30:05.099131071Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b78ebc92-f657-43ff-9dd2-2aa6fd8cc051 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:30:05 ha-900414 crio[3755]: time="2024-07-29 17:30:05.100278407Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=747d8720-e511-4a8e-a4cd-bdf7bf788c45 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:30:05 ha-900414 crio[3755]: time="2024-07-29 17:30:05.100708616Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722274205100685356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=747d8720-e511-4a8e-a4cd-bdf7bf788c45 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:30:05 ha-900414 crio[3755]: time="2024-07-29 17:30:05.101307150Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1e7c4eff-526d-414c-902c-7faa42473e2e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:30:05 ha-900414 crio[3755]: time="2024-07-29 17:30:05.101364462Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1e7c4eff-526d-414c-902c-7faa42473e2e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:30:05 ha-900414 crio[3755]: time="2024-07-29 17:30:05.101811314Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c5a0872090b8a9407f0e240df38b013e8c3038b5e47a8999d2dcfe1a6a847b2,PodSandboxId:c007a83285af87cc3a8f7decb0251012dde13811d97b57d08206a31427d5c3a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722274110760146881,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50fa96e8-1ee5-4e09-a734-802dbcd02bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 1a126ce7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:824354c7c16e9e8af404495001dc5d595a7cbc026c918156cf68ed850d7c19e8,PodSandboxId:cef075745bd3e13fa4553a22e0e8749baa40e9fecf2b65e77c60ddf312c1ed5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722274109764609958,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188869688c2292cb440067d4b4cfa9f3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237f4f9a22aab5537900193f82504f315f0a18522b4ec147a18810a7207e9d03,PodSandboxId:0104a45e7598e30bdbc9acd393e17b4589234866ffc4e4c6f0deb5f2b179f696,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722274108761219032,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc461575e1c166c1aa8b00d38af205a,},Annotations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6e4cdaa36f1850f6ade39480251a5593c0369b0410bccb53957d0d832f43e7,PodSandboxId:d12022815679b303c38ea086e2a76f39ec3d47f7ae40b748cf15ab6bb1fe964e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722274105028099226,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4fv4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6,},Annotations:map[string]string{io.kubernetes.container.hash: bbf9a5b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97477efe48afec6ed188f1d10e74857abdbc4a945712c05e01ae55c1f48fb38d,PodSandboxId:58018117a81ff794019dc4556482350f7196d6ed56994875b706a1aa9e5d434d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722274086072630131,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e529ef1ae527634e8684df95d99942df,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67fcbfdd88d8ba74c5d1c8a633f19a95728f6d2649e20c14873f44a9a83cb5fb,PodSandboxId:c007a83285af87cc3a8f7decb0251012dde13811d97b57d08206a31427d5c3a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722274072296491519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50fa96e8-1ee5-4e09-a734-802dbcd02bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 1a126ce7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74b6fc801cbb9e29df38fb778b6af9db3ab8950818cd18c03383e749fc4190a,PodSandboxId:d8ce74c22f4e2c9339fc50e8b032b204d4c56cce3332942283790ff88fa71d3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722274071856460173,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tng4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2303269f-50d3-4a63-aa76-891f001e6f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e285077a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:478caa49a9e2e5f11ac1dfc8c6e870c29f3045b994c601e6cd646952b9c0de2f,PodSandboxId:e37d55989869ca3baab53df8ebf4b721d60ede534281c48e3ff8c5b14c8d46b5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722274071924188035,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z9cvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2177daa-4efb-478c-845f-f30e77e91684,},Annotations:map[string]string{io.kubernetes.container.hash: 7870c1dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1a269a3
6cb51b460d86ff16520ffadbbd810606203c8a998252609f5a40452b,PodSandboxId:a863620823cc108fa12a70ef138b83fca0cf1c2f8e92863ae0a3b769dbf738d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722274071976348453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-48j6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 14f903c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96142230a5d1570c505a5127e3d4b0025e6c120c808eec6b1579291d9de14bb9,PodSandboxId:cef075745bd3e13fa4553a22e0e8749baa40e9fecf2b65e77c60ddf312c1ed5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722274071742708802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188869688c2292cb440067d4b4cfa9f3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96d199d063086572e53c3056fe3c36873a53cd3fe0bc1d0c796366f2c85d8b47,PodSandboxId:48bcddc3f018c08cbdf46edb4557396d8b25dfe6016f192534afa0c5a51328a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722274071709164105,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c283b6b662036e086a0948631d339c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ec5252a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e76e9ddc1762b284dde16c55310d2e5005380c7b7522aa4c92d164afc32b292,PodSandboxId:528c72454cdd017233d33e8fd2f875f1ca0d26df629c50c50451e523df5851c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722274071702640533,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9r87x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc4709f-f07b-4694-a352-aedd9c67bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: a73c1fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b08de2e88b41b61b1403daba370b669abfd1acee1793945733079da7004a6e,PodSandboxId:f69bbf70d4ce08eb7b420abc1362fb6f668ff031bb536ef59f5248de912ee3fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722274071580225499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba63e9544003a3
2c61ae2cfa7bb117,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d511eeca56c8ff8ccbb88e762a12ef7258f1c2175101320dda0553e82887c297,PodSandboxId:0104a45e7598e30bdbc9acd393e17b4589234866ffc4e4c6f0deb5f2b179f696,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722274071565576136,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc461575e1c166c1aa8b00d38af205a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:174e5d31268c70a798e1fa1fe5d2845d98eaed228a11b55810b7ca4680256a8e,PodSandboxId:7d2a64a5bcccdbfe3d1db48fd0a6231c01ec2f72f5944f5aa82835bdbbf8641b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722273570293260741,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4fv4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6,},Annot
ations:map[string]string{io.kubernetes.container.hash: bbf9a5b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7ffaf9ef2fda3e8c5965888c0244dd20c8cdc30b4ed1c300c5f9de3a70a127,PodSandboxId:7a0bb58ad2b90a00cbfe5381a420068caf367d6d0a46d8bfa235680d9a9e383c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722273433299033423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9r87x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc4709f-f07b-4694-a352-aedd9c67bbb2,},Annotations:map[string]string{io.kube
rnetes.container.hash: a73c1fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911569fe2373d5193385d0fdcc98071bacd23c7de020ed4e2ab3a15a3793c2d2,PodSandboxId:f47facc78da61a96cbc7f88d068ff1130bdf82703fa98c5e773eba93b8000852,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722273433248658511,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-48j6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 14f903c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b182b72bc50740d9cc2e0ed8b5c1d4b8f58c58594cc462fc796a75ccce7d38,PodSandboxId:30715fa1b9f024468de573f3e60b03860bdea65df505677b107723e5e7663d18,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722273421363258619,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z9cvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2177daa-4efb-478c-845f-f30e77e91684,},Annotations:map[string]string{io.kubernetes.container.hash: 7870c1dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ef29620e9c9670549fa7741de5956157c7a03728d417b46b44a7b1abbf2ce9,PodSandboxId:250f31f0996e1b89f155a50b796cf5c3e03e4e621f62973dc2ca1b4547440256,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722273417715030224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tng4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2303269f-50d3-4a63-aa76-891f001e6f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e285077a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7721018288f905547c9c059b6453a96e4c74f3573058e88425444162b255edf,PodSandboxId:e2a054b42822ad7d37df60a69fdb759eb309b8ee40e4c712e2f7ae6a2aaa0e6c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722273397639748438,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba63e9544003a32c61ae2cfa7bb117,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a27f5a54bd43275313e419dabaa643ad1764f5cd10953333df1eea8a9a4bf1b,PodSandboxId:46030b1ba43cfae01b3b4a26ba23e19c1dade394973241fddfe9126def4aa597,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722273397620067180,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c283b6b662036e086a0948631d339c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ec5252a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1e7c4eff-526d-414c-902c-7faa42473e2e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4c5a0872090b8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       3                   c007a83285af8       storage-provisioner
	824354c7c16e9       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   2                   cef075745bd3e       kube-controller-manager-ha-900414
	237f4f9a22aab       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            3                   0104a45e7598e       kube-apiserver-ha-900414
	4b6e4cdaa36f1       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   d12022815679b       busybox-fc5497c4f-4fv4t
	97477efe48afe       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      About a minute ago   Running             kube-vip                  0                   58018117a81ff       kube-vip-ha-900414
	67fcbfdd88d8b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       2                   c007a83285af8       storage-provisioner
	a1a269a36cb51       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   a863620823cc1       coredns-7db6d8ff4d-48j6w
	478caa49a9e2e       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      2 minutes ago        Running             kindnet-cni               1                   e37d55989869c       kindnet-z9cvz
	f74b6fc801cbb       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      2 minutes ago        Running             kube-proxy                1                   d8ce74c22f4e2       kube-proxy-tng4t
	96142230a5d15       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Exited              kube-controller-manager   1                   cef075745bd3e       kube-controller-manager-ha-900414
	96d199d063086       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   48bcddc3f018c       etcd-ha-900414
	5e76e9ddc1762       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   528c72454cdd0       coredns-7db6d8ff4d-9r87x
	c4b08de2e88b4       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      2 minutes ago        Running             kube-scheduler            1                   f69bbf70d4ce0       kube-scheduler-ha-900414
	d511eeca56c8f       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Exited              kube-apiserver            2                   0104a45e7598e       kube-apiserver-ha-900414
	174e5d31268c7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   7d2a64a5bcccd       busybox-fc5497c4f-4fv4t
	7d7ffaf9ef2fd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Exited              coredns                   0                   7a0bb58ad2b90       coredns-7db6d8ff4d-9r87x
	911569fe2373d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Exited              coredns                   0                   f47facc78da61       coredns-7db6d8ff4d-48j6w
	10b182b72bc50       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    13 minutes ago       Exited              kindnet-cni               0                   30715fa1b9f02       kindnet-z9cvz
	37ef29620e9c9       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago       Exited              kube-proxy                0                   250f31f0996e1       kube-proxy-tng4t
	a7721018288f9       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago       Exited              kube-scheduler            0                   e2a054b42822a       kube-scheduler-ha-900414
	2a27f5a54bd43       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   46030b1ba43cf       etcd-ha-900414
	
	
	==> coredns [5e76e9ddc1762b284dde16c55310d2e5005380c7b7522aa4c92d164afc32b292] <==
	[INFO] plugin/kubernetes: Trace[920478214]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 17:27:53.778) (total time: 10001ms):
	Trace[920478214]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:28:03.779)
	Trace[920478214]: [10.001716395s] [10.001716395s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:41066->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:41066->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:37802->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:37802->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37800->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1146101281]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 17:28:06.495) (total time: 10266ms):
	Trace[1146101281]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37800->10.96.0.1:443: read: connection reset by peer 10266ms (17:28:16.761)
	Trace[1146101281]: [10.266321597s] [10.266321597s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37800->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [7d7ffaf9ef2fda3e8c5965888c0244dd20c8cdc30b4ed1c300c5f9de3a70a127] <==
	[INFO] 10.244.1.2:35645 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001855497s
	[INFO] 10.244.1.2:43192 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000177303s
	[INFO] 10.244.1.2:33281 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000160789s
	[INFO] 10.244.1.2:57013 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097416s
	[INFO] 10.244.2.2:38166 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136029s
	[INFO] 10.244.2.2:33640 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001913014s
	[INFO] 10.244.2.2:47485 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104905s
	[INFO] 10.244.2.2:45778 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170534s
	[INFO] 10.244.2.2:59234 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076101s
	[INFO] 10.244.0.4:50535 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065536s
	[INFO] 10.244.1.2:58622 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133396s
	[INFO] 10.244.1.2:33438 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102338s
	[INFO] 10.244.2.2:45926 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000383812s
	[INFO] 10.244.2.2:56980 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000187545s
	[INFO] 10.244.2.2:43137 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00016801s
	[INFO] 10.244.0.4:57612 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000159389s
	[INFO] 10.244.1.2:58047 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014126s
	[INFO] 10.244.1.2:45045 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000123813s
	[INFO] 10.244.2.2:35311 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000173973s
	[INFO] 10.244.2.2:47044 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000140928s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [911569fe2373d5193385d0fdcc98071bacd23c7de020ed4e2ab3a15a3793c2d2] <==
	[INFO] 10.244.0.4:43001 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000165887s
	[INFO] 10.244.1.2:43677 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128129s
	[INFO] 10.244.1.2:39513 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001354968s
	[INFO] 10.244.1.2:52828 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000183362s
	[INFO] 10.244.2.2:51403 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116578s
	[INFO] 10.244.2.2:47706 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001162998s
	[INFO] 10.244.2.2:39349 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000083497s
	[INFO] 10.244.0.4:43643 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164666s
	[INFO] 10.244.0.4:51941 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067553s
	[INFO] 10.244.0.4:33186 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117492s
	[INFO] 10.244.1.2:36002 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170421s
	[INFO] 10.244.1.2:41186 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135424s
	[INFO] 10.244.2.2:40469 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00015464s
	[INFO] 10.244.0.4:58750 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131521s
	[INFO] 10.244.0.4:59782 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141641s
	[INFO] 10.244.0.4:47289 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000189592s
	[INFO] 10.244.1.2:44743 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000121922s
	[INFO] 10.244.1.2:60901 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099491s
	[INFO] 10.244.2.2:53612 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000143831s
	[INFO] 10.244.2.2:35693 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000120049s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a1a269a36cb51b460d86ff16520ffadbbd810606203c8a998252609f5a40452b] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1545848923]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 17:28:00.396) (total time: 10001ms):
	Trace[1545848923]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:28:10.397)
	Trace[1545848923]: [10.001306039s] [10.001306039s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:52238->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:52238->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-900414
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-900414
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=ha-900414
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T17_16_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:16:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-900414
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:29:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:28:31 +0000   Mon, 29 Jul 2024 17:16:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:28:31 +0000   Mon, 29 Jul 2024 17:16:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:28:31 +0000   Mon, 29 Jul 2024 17:16:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:28:31 +0000   Mon, 29 Jul 2024 17:17:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.114
	  Hostname:    ha-900414
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d0301ef966ab4d039cde4e4959e83ea6
	  System UUID:                d0301ef9-66ab-4d03-9cde-4e4959e83ea6
	  Boot ID:                    ea7d1983-2f49-4874-b67f-d8eea13c27d6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4fv4t              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-48j6w             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-9r87x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-900414                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-z9cvz                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-900414             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-900414    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-tng4t                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-900414             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-900414                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 92s                    kube-proxy       
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-900414 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-900414 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-900414 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-900414 event: Registered Node ha-900414 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-900414 status is now: NodeReady
	  Normal   RegisteredNode           11m                    node-controller  Node ha-900414 event: Registered Node ha-900414 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-900414 event: Registered Node ha-900414 in Controller
	  Warning  ContainerGCFailed        2m22s (x2 over 3m22s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           83s                    node-controller  Node ha-900414 event: Registered Node ha-900414 in Controller
	  Normal   RegisteredNode           82s                    node-controller  Node ha-900414 event: Registered Node ha-900414 in Controller
	  Normal   RegisteredNode           19s                    node-controller  Node ha-900414 event: Registered Node ha-900414 in Controller
	
	
	Name:               ha-900414-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-900414-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=ha-900414
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T17_17_55_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:17:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-900414-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:29:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:28:47 +0000   Mon, 29 Jul 2024 17:28:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:28:47 +0000   Mon, 29 Jul 2024 17:28:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:28:47 +0000   Mon, 29 Jul 2024 17:28:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:28:47 +0000   Mon, 29 Jul 2024 17:28:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.111
	  Hostname:    ha-900414-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 854b5d80a28944e1a0d7e90a65ef964f
	  System UUID:                854b5d80-a289-44e1-a0d7-e90a65ef964f
	  Boot ID:                    4c2eb12b-96bb-4615-a8c3-460c54019d18
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dqz55                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-900414-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-kdzhk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-900414-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-900414-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-bgq99                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-900414-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-900414-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 89s                  kube-proxy       
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                  node-controller  Node ha-900414-m02 event: Registered Node ha-900414-m02 in Controller
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-900414-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-900414-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-900414-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                  node-controller  Node ha-900414-m02 event: Registered Node ha-900414-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-900414-m02 event: Registered Node ha-900414-m02 in Controller
	  Normal  NodeNotReady             8m39s                node-controller  Node ha-900414-m02 status is now: NodeNotReady
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s (x8 over 116s)  kubelet          Node ha-900414-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 116s)  kubelet          Node ha-900414-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x7 over 116s)  kubelet          Node ha-900414-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  116s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           83s                  node-controller  Node ha-900414-m02 event: Registered Node ha-900414-m02 in Controller
	  Normal  RegisteredNode           82s                  node-controller  Node ha-900414-m02 event: Registered Node ha-900414-m02 in Controller
	  Normal  RegisteredNode           19s                  node-controller  Node ha-900414-m02 event: Registered Node ha-900414-m02 in Controller
	
	
	Name:               ha-900414-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-900414-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=ha-900414
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T17_19_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:19:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-900414-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:29:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:29:46 +0000   Mon, 29 Jul 2024 17:19:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:29:46 +0000   Mon, 29 Jul 2024 17:19:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:29:46 +0000   Mon, 29 Jul 2024 17:19:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:29:46 +0000   Mon, 29 Jul 2024 17:19:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    ha-900414-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a83fa48485e44a66899d03b0bc3026ab
	  System UUID:                a83fa484-85e4-4a66-899d-03b0bc3026ab
	  Boot ID:                    f111beef-5fa6-45b3-ac23-0df888c08edb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-s9sz8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-900414-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-6vzd2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-900414-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-900414-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-wnfsb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-900414-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-900414-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 33s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-900414-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-900414-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-900414-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-900414-m03 event: Registered Node ha-900414-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-900414-m03 event: Registered Node ha-900414-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-900414-m03 event: Registered Node ha-900414-m03 in Controller
	  Normal   RegisteredNode           83s                node-controller  Node ha-900414-m03 event: Registered Node ha-900414-m03 in Controller
	  Normal   RegisteredNode           82s                node-controller  Node ha-900414-m03 event: Registered Node ha-900414-m03 in Controller
	  Normal   Starting                 49s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  49s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  49s                kubelet          Node ha-900414-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    49s                kubelet          Node ha-900414-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     49s                kubelet          Node ha-900414-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 49s                kubelet          Node ha-900414-m03 has been rebooted, boot id: f111beef-5fa6-45b3-ac23-0df888c08edb
	  Normal   RegisteredNode           19s                node-controller  Node ha-900414-m03 event: Registered Node ha-900414-m03 in Controller
	
	
	Name:               ha-900414-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-900414-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=ha-900414
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T17_20_07_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:20:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-900414-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:29:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:29:57 +0000   Mon, 29 Jul 2024 17:29:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:29:57 +0000   Mon, 29 Jul 2024 17:29:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:29:57 +0000   Mon, 29 Jul 2024 17:29:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:29:57 +0000   Mon, 29 Jul 2024 17:29:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.156
	  Hostname:    ha-900414-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 82b534ad740b47cbae65e1e5acf41d9a
	  System UUID:                82b534ad-740b-47cb-ae65-e1e5acf41d9a
	  Boot ID:                    9322d5b7-493e-4e2b-ba3e-df1b1dd665c0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4fsvj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m59s
	  kube-system                 kube-proxy-hf5lx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4s                     kube-proxy       
	  Normal   Starting                 9m54s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  9m59s (x3 over 9m59s)  kubelet          Node ha-900414-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m59s (x3 over 9m59s)  kubelet          Node ha-900414-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m59s (x3 over 9m59s)  kubelet          Node ha-900414-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m58s                  node-controller  Node ha-900414-m04 event: Registered Node ha-900414-m04 in Controller
	  Normal   RegisteredNode           9m55s                  node-controller  Node ha-900414-m04 event: Registered Node ha-900414-m04 in Controller
	  Normal   RegisteredNode           9m54s                  node-controller  Node ha-900414-m04 event: Registered Node ha-900414-m04 in Controller
	  Normal   NodeReady                9m40s                  kubelet          Node ha-900414-m04 status is now: NodeReady
	  Normal   RegisteredNode           83s                    node-controller  Node ha-900414-m04 event: Registered Node ha-900414-m04 in Controller
	  Normal   RegisteredNode           82s                    node-controller  Node ha-900414-m04 event: Registered Node ha-900414-m04 in Controller
	  Normal   NodeNotReady             43s                    node-controller  Node ha-900414-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           19s                    node-controller  Node ha-900414-m04 event: Registered Node ha-900414-m04 in Controller
	  Normal   Starting                 8s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                     kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)        kubelet          Node ha-900414-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)        kubelet          Node ha-900414-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)        kubelet          Node ha-900414-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                     kubelet          Node ha-900414-m04 has been rebooted, boot id: 9322d5b7-493e-4e2b-ba3e-df1b1dd665c0
	  Normal   NodeReady                8s                     kubelet          Node ha-900414-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.778112] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.061146] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062274] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.167660] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.152034] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.281641] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.241432] systemd-fstab-generator[770]: Ignoring "noauto" option for root device
	[  +5.177057] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[  +0.055961] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.040447] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	[  +0.087819] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.082632] kauditd_printk_skb: 21 callbacks suppressed
	[Jul29 17:17] kauditd_printk_skb: 38 callbacks suppressed
	[ +45.217420] kauditd_printk_skb: 26 callbacks suppressed
	[Jul29 17:27] systemd-fstab-generator[3668]: Ignoring "noauto" option for root device
	[  +0.158652] systemd-fstab-generator[3680]: Ignoring "noauto" option for root device
	[  +0.176057] systemd-fstab-generator[3699]: Ignoring "noauto" option for root device
	[  +0.143060] systemd-fstab-generator[3711]: Ignoring "noauto" option for root device
	[  +0.269167] systemd-fstab-generator[3739]: Ignoring "noauto" option for root device
	[  +0.807853] systemd-fstab-generator[3842]: Ignoring "noauto" option for root device
	[  +5.953703] kauditd_printk_skb: 122 callbacks suppressed
	[Jul29 17:28] kauditd_printk_skb: 85 callbacks suppressed
	[ +10.070535] kauditd_printk_skb: 1 callbacks suppressed
	[ +17.340648] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [2a27f5a54bd43275313e419dabaa643ad1764f5cd10953333df1eea8a9a4bf1b] <==
	2024/07/29 17:26:12 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-29T17:26:12.206438Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"648.413457ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" limit:500 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-07-29T17:26:12.206486Z","caller":"traceutil/trace.go:171","msg":"trace[558497648] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; }","duration":"648.586686ms","start":"2024-07-29T17:26:11.557892Z","end":"2024-07-29T17:26:12.206479Z","steps":["trace[558497648] 'agreement among raft nodes before linearized reading'  (duration: 648.53391ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T17:26:12.206515Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T17:26:11.557879Z","time spent":"648.623025ms","remote":"127.0.0.1:43202","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":0,"request content":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" limit:500 "}
	2024/07/29 17:26:12 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-29T17:26:12.265193Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.114:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T17:26:12.265333Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.114:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T17:26:12.271164Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"7df1350fafd42bce","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-29T17:26:12.271305Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"add4e939ac9b709a"}
	{"level":"info","ts":"2024-07-29T17:26:12.271337Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"add4e939ac9b709a"}
	{"level":"info","ts":"2024-07-29T17:26:12.271362Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"add4e939ac9b709a"}
	{"level":"info","ts":"2024-07-29T17:26:12.27148Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7df1350fafd42bce","remote-peer-id":"add4e939ac9b709a"}
	{"level":"info","ts":"2024-07-29T17:26:12.271533Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7df1350fafd42bce","remote-peer-id":"add4e939ac9b709a"}
	{"level":"info","ts":"2024-07-29T17:26:12.27158Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7df1350fafd42bce","remote-peer-id":"add4e939ac9b709a"}
	{"level":"info","ts":"2024-07-29T17:26:12.271591Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"add4e939ac9b709a"}
	{"level":"info","ts":"2024-07-29T17:26:12.271596Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d9be55a8daa69990"}
	{"level":"info","ts":"2024-07-29T17:26:12.271607Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d9be55a8daa69990"}
	{"level":"info","ts":"2024-07-29T17:26:12.271648Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d9be55a8daa69990"}
	{"level":"info","ts":"2024-07-29T17:26:12.271848Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990"}
	{"level":"info","ts":"2024-07-29T17:26:12.271889Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990"}
	{"level":"info","ts":"2024-07-29T17:26:12.271978Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990"}
	{"level":"info","ts":"2024-07-29T17:26:12.272004Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d9be55a8daa69990"}
	{"level":"info","ts":"2024-07-29T17:26:12.274578Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.114:2380"}
	{"level":"info","ts":"2024-07-29T17:26:12.274706Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.114:2380"}
	{"level":"info","ts":"2024-07-29T17:26:12.274717Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-900414","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.114:2380"],"advertise-client-urls":["https://192.168.39.114:2379"]}
	
	
	==> etcd [96d199d063086572e53c3056fe3c36873a53cd3fe0bc1d0c796366f2c85d8b47] <==
	{"level":"warn","ts":"2024-07-29T17:29:17.872625Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"add4e939ac9b709a","rtt":"0s","error":"dial tcp 192.168.39.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T17:29:17.872898Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"add4e939ac9b709a","rtt":"0s","error":"dial tcp 192.168.39.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T17:29:19.045794Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.6:2380/version","remote-member-id":"add4e939ac9b709a","error":"Get \"https://192.168.39.6:2380/version\": dial tcp 192.168.39.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T17:29:19.046065Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"add4e939ac9b709a","error":"Get \"https://192.168.39.6:2380/version\": dial tcp 192.168.39.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T17:29:22.8733Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"add4e939ac9b709a","rtt":"0s","error":"dial tcp 192.168.39.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T17:29:22.87334Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"add4e939ac9b709a","rtt":"0s","error":"dial tcp 192.168.39.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T17:29:23.047806Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.6:2380/version","remote-member-id":"add4e939ac9b709a","error":"Get \"https://192.168.39.6:2380/version\": dial tcp 192.168.39.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T17:29:23.04803Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"add4e939ac9b709a","error":"Get \"https://192.168.39.6:2380/version\": dial tcp 192.168.39.6:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-29T17:29:24.07833Z","caller":"traceutil/trace.go:171","msg":"trace[548773145] transaction","detail":"{read_only:false; response_revision:2345; number_of_response:1; }","duration":"105.458139ms","start":"2024-07-29T17:29:23.972836Z","end":"2024-07-29T17:29:24.078294Z","steps":["trace[548773145] 'process raft request'  (duration: 105.315532ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T17:29:27.050008Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.6:2380/version","remote-member-id":"add4e939ac9b709a","error":"Get \"https://192.168.39.6:2380/version\": dial tcp 192.168.39.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T17:29:27.050142Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"add4e939ac9b709a","error":"Get \"https://192.168.39.6:2380/version\": dial tcp 192.168.39.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T17:29:27.874525Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"add4e939ac9b709a","rtt":"0s","error":"dial tcp 192.168.39.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T17:29:27.874609Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"add4e939ac9b709a","rtt":"0s","error":"dial tcp 192.168.39.6:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-29T17:29:28.265627Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"7df1350fafd42bce","to":"add4e939ac9b709a","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-29T17:29:28.265684Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"add4e939ac9b709a"}
	{"level":"info","ts":"2024-07-29T17:29:28.265707Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"7df1350fafd42bce","remote-peer-id":"add4e939ac9b709a"}
	{"level":"info","ts":"2024-07-29T17:29:28.271412Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7df1350fafd42bce","remote-peer-id":"add4e939ac9b709a"}
	{"level":"info","ts":"2024-07-29T17:29:28.274706Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"7df1350fafd42bce","to":"add4e939ac9b709a","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-29T17:29:28.274798Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"7df1350fafd42bce","remote-peer-id":"add4e939ac9b709a"}
	{"level":"info","ts":"2024-07-29T17:29:28.27474Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"7df1350fafd42bce","remote-peer-id":"add4e939ac9b709a"}
	{"level":"warn","ts":"2024-07-29T17:30:02.699736Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.094094ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-wnfsb\" ","response":"range_response_count:1 size:4661"}
	{"level":"info","ts":"2024-07-29T17:30:02.700298Z","caller":"traceutil/trace.go:171","msg":"trace[902550932] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-wnfsb; range_end:; response_count:1; response_revision:2502; }","duration":"147.805095ms","start":"2024-07-29T17:30:02.552463Z","end":"2024-07-29T17:30:02.700268Z","steps":["trace[902550932] 'agreement among raft nodes before linearized reading'  (duration: 99.880067ms)","trace[902550932] 'range keys from in-memory index tree'  (duration: 47.149182ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T17:30:02.700058Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.48159ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:434"}
	{"level":"info","ts":"2024-07-29T17:30:02.700614Z","caller":"traceutil/trace.go:171","msg":"trace[1086273481] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2503; }","duration":"143.027834ms","start":"2024-07-29T17:30:02.557542Z","end":"2024-07-29T17:30:02.70057Z","steps":["trace[1086273481] 'agreement among raft nodes before linearized reading'  (duration: 142.361019ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T17:30:02.700118Z","caller":"traceutil/trace.go:171","msg":"trace[439385542] transaction","detail":"{read_only:false; response_revision:2503; number_of_response:1; }","duration":"151.903879ms","start":"2024-07-29T17:30:02.548197Z","end":"2024-07-29T17:30:02.700101Z","steps":["trace[439385542] 'process raft request'  (duration: 103.905744ms)","trace[439385542] 'compare'  (duration: 47.71552ms)"],"step_count":2}
	
	
	==> kernel <==
	 17:30:05 up 14 min,  0 users,  load average: 0.43, 0.41, 0.27
	Linux ha-900414 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [10b182b72bc50740d9cc2e0ed8b5c1d4b8f58c58594cc462fc796a75ccce7d38] <==
	I0729 17:25:42.577057       1 main.go:295] Handling node with IPs: map[192.168.39.156:{}]
	I0729 17:25:42.577099       1 main.go:322] Node ha-900414-m04 has CIDR [10.244.3.0/24] 
	I0729 17:25:42.577245       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0729 17:25:42.577270       1 main.go:299] handling current node
	I0729 17:25:42.577299       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0729 17:25:42.577304       1 main.go:322] Node ha-900414-m02 has CIDR [10.244.1.0/24] 
	I0729 17:25:42.577366       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0729 17:25:42.577371       1 main.go:322] Node ha-900414-m03 has CIDR [10.244.2.0/24] 
	I0729 17:25:52.573504       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0729 17:25:52.573575       1 main.go:299] handling current node
	I0729 17:25:52.573595       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0729 17:25:52.573603       1 main.go:322] Node ha-900414-m02 has CIDR [10.244.1.0/24] 
	I0729 17:25:52.573764       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0729 17:25:52.573787       1 main.go:322] Node ha-900414-m03 has CIDR [10.244.2.0/24] 
	I0729 17:25:52.573844       1 main.go:295] Handling node with IPs: map[192.168.39.156:{}]
	I0729 17:25:52.573865       1 main.go:322] Node ha-900414-m04 has CIDR [10.244.3.0/24] 
	I0729 17:26:02.569975       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0729 17:26:02.570090       1 main.go:322] Node ha-900414-m02 has CIDR [10.244.1.0/24] 
	I0729 17:26:02.570315       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0729 17:26:02.570394       1 main.go:322] Node ha-900414-m03 has CIDR [10.244.2.0/24] 
	I0729 17:26:02.570490       1 main.go:295] Handling node with IPs: map[192.168.39.156:{}]
	I0729 17:26:02.570512       1 main.go:322] Node ha-900414-m04 has CIDR [10.244.3.0/24] 
	I0729 17:26:02.570585       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0729 17:26:02.570605       1 main.go:299] handling current node
	E0729 17:26:10.719217       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	
	
	==> kindnet [478caa49a9e2e5f11ac1dfc8c6e870c29f3045b994c601e6cd646952b9c0de2f] <==
	I0729 17:29:33.086588       1 main.go:322] Node ha-900414-m04 has CIDR [10.244.3.0/24] 
	I0729 17:29:43.085169       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0729 17:29:43.085218       1 main.go:299] handling current node
	I0729 17:29:43.085244       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0729 17:29:43.085250       1 main.go:322] Node ha-900414-m02 has CIDR [10.244.1.0/24] 
	I0729 17:29:43.085413       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0729 17:29:43.085437       1 main.go:322] Node ha-900414-m03 has CIDR [10.244.2.0/24] 
	I0729 17:29:43.085489       1 main.go:295] Handling node with IPs: map[192.168.39.156:{}]
	I0729 17:29:43.085509       1 main.go:322] Node ha-900414-m04 has CIDR [10.244.3.0/24] 
	I0729 17:29:53.082316       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0729 17:29:53.082424       1 main.go:299] handling current node
	I0729 17:29:53.082451       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0729 17:29:53.082459       1 main.go:322] Node ha-900414-m02 has CIDR [10.244.1.0/24] 
	I0729 17:29:53.082670       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0729 17:29:53.082681       1 main.go:322] Node ha-900414-m03 has CIDR [10.244.2.0/24] 
	I0729 17:29:53.082729       1 main.go:295] Handling node with IPs: map[192.168.39.156:{}]
	I0729 17:29:53.082752       1 main.go:322] Node ha-900414-m04 has CIDR [10.244.3.0/24] 
	I0729 17:30:03.087069       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0729 17:30:03.087221       1 main.go:299] handling current node
	I0729 17:30:03.087251       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0729 17:30:03.087270       1 main.go:322] Node ha-900414-m02 has CIDR [10.244.1.0/24] 
	I0729 17:30:03.087432       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0729 17:30:03.087457       1 main.go:322] Node ha-900414-m03 has CIDR [10.244.2.0/24] 
	I0729 17:30:03.087514       1 main.go:295] Handling node with IPs: map[192.168.39.156:{}]
	I0729 17:30:03.087531       1 main.go:322] Node ha-900414-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [237f4f9a22aab5537900193f82504f315f0a18522b4ec147a18810a7207e9d03] <==
	I0729 17:28:31.053573       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0729 17:28:31.053613       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0729 17:28:31.053645       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0729 17:28:31.133732       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 17:28:31.133881       1 policy_source.go:224] refreshing policies
	I0729 17:28:31.138630       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 17:28:31.147697       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 17:28:31.147803       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 17:28:31.148164       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 17:28:31.148193       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 17:28:31.148647       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 17:28:31.148702       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 17:28:31.148981       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 17:28:31.149200       1 aggregator.go:165] initial CRD sync complete...
	I0729 17:28:31.149273       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 17:28:31.149359       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 17:28:31.149421       1 cache.go:39] Caches are synced for autoregister controller
	I0729 17:28:31.155908       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0729 17:28:31.159534       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.111 192.168.39.6]
	I0729 17:28:31.160844       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 17:28:31.170378       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0729 17:28:31.190746       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0729 17:28:31.224315       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 17:28:32.063847       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0729 17:28:32.511551       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.111 192.168.39.114 192.168.39.6]
	
	
	==> kube-apiserver [d511eeca56c8ff8ccbb88e762a12ef7258f1c2175101320dda0553e82887c297] <==
	I0729 17:27:52.334061       1 options.go:221] external host was not specified, using 192.168.39.114
	I0729 17:27:52.347705       1 server.go:148] Version: v1.30.3
	I0729 17:27:52.347770       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 17:27:52.954753       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 17:27:52.963999       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 17:27:52.964128       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 17:27:52.964153       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 17:27:52.964319       1 instance.go:299] Using reconciler: lease
	W0729 17:28:12.952891       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0729 17:28:12.953414       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0729 17:28:12.965355       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0729 17:28:12.965355       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [824354c7c16e9e8af404495001dc5d595a7cbc026c918156cf68ed850d7c19e8] <==
	I0729 17:28:43.544910       1 shared_informer.go:320] Caches are synced for disruption
	I0729 17:28:43.556470       1 shared_informer.go:320] Caches are synced for HPA
	I0729 17:28:43.607896       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0729 17:28:43.608132       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 17:28:43.608682       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0729 17:28:43.609338       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0729 17:28:43.610513       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0729 17:28:43.641230       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 17:28:43.660009       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0729 17:28:44.076233       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 17:28:44.126630       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 17:28:44.126670       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 17:28:53.055190       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-x4xpd EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-x4xpd\": the object has been modified; please apply your changes to the latest version and try again"
	I0729 17:28:53.055792       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"5c994334-3ac8-4f46-9d5a-681cf5767a10", APIVersion:"v1", ResourceVersion:"246", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-x4xpd EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-x4xpd": the object has been modified; please apply your changes to the latest version and try again
	I0729 17:28:53.070204       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-x4xpd EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-x4xpd\": the object has been modified; please apply your changes to the latest version and try again"
	I0729 17:28:53.070288       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"5c994334-3ac8-4f46-9d5a-681cf5767a10", APIVersion:"v1", ResourceVersion:"246", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-x4xpd EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-x4xpd": the object has been modified; please apply your changes to the latest version and try again
	I0729 17:28:53.096070       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="90.658484ms"
	I0729 17:28:53.096207       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.194µs"
	I0729 17:28:53.114139       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="16.501479ms"
	I0729 17:28:53.114741       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.177µs"
	I0729 17:29:17.332597       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.711031ms"
	I0729 17:29:17.332894       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="116.503µs"
	I0729 17:29:35.727242       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.204649ms"
	I0729 17:29:35.728151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="105.905µs"
	I0729 17:29:57.272455       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-900414-m04"
	
	
	==> kube-controller-manager [96142230a5d1570c505a5127e3d4b0025e6c120c808eec6b1579291d9de14bb9] <==
	I0729 17:27:52.856341       1 serving.go:380] Generated self-signed cert in-memory
	I0729 17:27:53.462620       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 17:27:53.462675       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 17:27:53.464454       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 17:27:53.464598       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 17:27:53.464791       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 17:27:53.465036       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0729 17:28:13.973747       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.114:8443/healthz\": dial tcp 192.168.39.114:8443: connect: connection refused"
	
	
	==> kube-proxy [37ef29620e9c9670549fa7741de5956157c7a03728d417b46b44a7b1abbf2ce9] <==
	E0729 17:24:54.585578       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1852": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:24:54.585371       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-900414&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:24:54.585614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-900414&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:25:01.625399       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:25:01.625470       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:25:01.625403       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-900414&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:25:01.625507       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-900414&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:25:01.625762       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1852": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:25:01.625901       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1852": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:25:10.842335       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1852": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:25:10.842731       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1852": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:25:13.913501       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:25:13.913554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:25:13.913619       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-900414&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:25:13.913636       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-900414&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:25:32.346213       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1852": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:25:32.346333       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1852": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:25:32.346598       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:25:32.346651       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:25:32.346736       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-900414&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:25:32.346792       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-900414&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:26:03.066557       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:26:03.066665       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:26:09.210117       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1852": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:26:09.210672       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1852": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [f74b6fc801cbb9e29df38fb778b6af9db3ab8950818cd18c03383e749fc4190a] <==
	I0729 17:27:53.490852       1 server_linux.go:69] "Using iptables proxy"
	E0729 17:27:53.659567       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-900414\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 17:27:56.730026       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-900414\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 17:27:59.801662       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-900414\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 17:28:05.945581       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-900414\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 17:28:15.162464       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-900414\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0729 17:28:33.411661       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.114"]
	I0729 17:28:33.453166       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 17:28:33.453320       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 17:28:33.453352       1 server_linux.go:165] "Using iptables Proxier"
	I0729 17:28:33.457070       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 17:28:33.458044       1 server.go:872] "Version info" version="v1.30.3"
	I0729 17:28:33.458154       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 17:28:33.460518       1 config.go:192] "Starting service config controller"
	I0729 17:28:33.460589       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 17:28:33.460640       1 config.go:101] "Starting endpoint slice config controller"
	I0729 17:28:33.460657       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 17:28:33.462595       1 config.go:319] "Starting node config controller"
	I0729 17:28:33.463414       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 17:28:33.560906       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 17:28:33.561011       1 shared_informer.go:320] Caches are synced for service config
	I0729 17:28:33.564167       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a7721018288f905547c9c059b6453a96e4c74f3573058e88425444162b255edf] <==
	W0729 17:26:07.794760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 17:26:07.794850       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 17:26:07.902269       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 17:26:07.902400       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 17:26:08.011995       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 17:26:08.012048       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 17:26:08.818148       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 17:26:08.818276       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 17:26:09.071257       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 17:26:09.071347       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 17:26:09.167572       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 17:26:09.167662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 17:26:09.319179       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 17:26:09.319230       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 17:26:09.319355       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 17:26:09.319390       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 17:26:09.477983       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 17:26:09.478081       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 17:26:09.628764       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 17:26:09.628813       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 17:26:10.398031       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 17:26:10.398123       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0729 17:26:12.176222       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0729 17:26:12.200716       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0729 17:26:12.201651       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c4b08de2e88b41b61b1403daba370b669abfd1acee1793945733079da7004a6e] <==
	W0729 17:28:22.152709       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.114:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	E0729 17:28:22.152793       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.114:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	W0729 17:28:22.175320       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.114:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	E0729 17:28:22.175536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.114:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	W0729 17:28:22.567262       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.114:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	E0729 17:28:22.567374       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.114:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	W0729 17:28:22.715419       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.114:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	E0729 17:28:22.715457       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.114:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	W0729 17:28:23.024540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.114:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	E0729 17:28:23.024601       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.114:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	W0729 17:28:23.317883       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.114:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	E0729 17:28:23.318040       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.114:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	W0729 17:28:23.673885       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.114:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	E0729 17:28:23.673997       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.114:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	W0729 17:28:28.280734       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.114:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	E0729 17:28:28.280857       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.114:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	W0729 17:28:28.441902       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.114:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	E0729 17:28:28.442177       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.114:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	W0729 17:28:28.629372       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.114:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	E0729 17:28:28.629524       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.114:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	W0729 17:28:28.822131       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.114:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	E0729 17:28:28.822245       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.114:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	W0729 17:28:31.099378       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 17:28:31.099428       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0729 17:28:53.978118       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 17:28:21 ha-900414 kubelet[1377]: E0729 17:28:21.305766    1377 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-proxy&resourceVersion=1849": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 29 17:28:24 ha-900414 kubelet[1377]: I0729 17:28:24.377364    1377 status_manager.go:853] "Failed to get status for pod" podUID="50fa96e8-1ee5-4e09-a734-802dbcd02bcc" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 17:28:24 ha-900414 kubelet[1377]: W0729 17:28:24.377383    1377 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 29 17:28:24 ha-900414 kubelet[1377]: E0729 17:28:24.378083    1377 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 29 17:28:27 ha-900414 kubelet[1377]: E0729 17:28:27.449604    1377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-900414?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Jul 29 17:28:27 ha-900414 kubelet[1377]: I0729 17:28:27.450122    1377 status_manager.go:853] "Failed to get status for pod" podUID="306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d" pod="kube-system/coredns-7db6d8ff4d-48j6w" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-48j6w\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 17:28:27 ha-900414 kubelet[1377]: E0729 17:28:27.450103    1377 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-900414.17e6bef2de491d36  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-900414,UID:3dc461575e1c166c1aa8b00d38af205a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-900414,},FirstTimestamp:2024-07-29 17:24:17.836490038 +0000 UTC m=+454.266302510,LastTimestamp:2024-07-29 17:24:17.836490038 +0000 UTC m=+454.266302510,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-900414,}"
	Jul 29 17:28:28 ha-900414 kubelet[1377]: I0729 17:28:28.741679    1377 scope.go:117] "RemoveContainer" containerID="d511eeca56c8ff8ccbb88e762a12ef7258f1c2175101320dda0553e82887c297"
	Jul 29 17:28:29 ha-900414 kubelet[1377]: I0729 17:28:29.741518    1377 scope.go:117] "RemoveContainer" containerID="96142230a5d1570c505a5127e3d4b0025e6c120c808eec6b1579291d9de14bb9"
	Jul 29 17:28:30 ha-900414 kubelet[1377]: I0729 17:28:30.521236    1377 status_manager.go:853] "Failed to get status for pod" podUID="bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6" pod="default/busybox-fc5497c4f-4fv4t" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/pods/busybox-fc5497c4f-4fv4t\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 17:28:30 ha-900414 kubelet[1377]: I0729 17:28:30.741663    1377 scope.go:117] "RemoveContainer" containerID="67fcbfdd88d8ba74c5d1c8a633f19a95728f6d2649e20c14873f44a9a83cb5fb"
	Jul 29 17:28:43 ha-900414 kubelet[1377]: I0729 17:28:43.773532    1377 scope.go:117] "RemoveContainer" containerID="e4d3e21e2fdd017f698d1d9d2ba208122c495a9c6273542bd16759ffc40e16a1"
	Jul 29 17:28:43 ha-900414 kubelet[1377]: E0729 17:28:43.808680    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:28:43 ha-900414 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:28:43 ha-900414 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:28:43 ha-900414 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:28:43 ha-900414 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:28:44 ha-900414 kubelet[1377]: I0729 17:28:44.165389    1377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-4fv4t" podStartSLOduration=555.272981287 podStartE2EDuration="9m16.165332205s" podCreationTimestamp="2024-07-29 17:19:28 +0000 UTC" firstStartedPulling="2024-07-29 17:19:29.383688696 +0000 UTC m=+165.813501167" lastFinishedPulling="2024-07-29 17:19:30.276039603 +0000 UTC m=+166.705852085" observedRunningTime="2024-07-29 17:19:30.501152176 +0000 UTC m=+166.930964668" watchObservedRunningTime="2024-07-29 17:28:44.165332205 +0000 UTC m=+720.595144694"
	Jul 29 17:29:12 ha-900414 kubelet[1377]: I0729 17:29:12.740862    1377 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-900414" podUID="bf3918b4-6cc5-499b-808e-b6c33138cae2"
	Jul 29 17:29:12 ha-900414 kubelet[1377]: I0729 17:29:12.765878    1377 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-900414"
	Jul 29 17:29:43 ha-900414 kubelet[1377]: E0729 17:29:43.788070    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:29:43 ha-900414 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:29:43 ha-900414 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:29:43 ha-900414 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:29:43 ha-900414 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 17:30:04.629463   37755 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19345-11206/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
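Note on the stderr above: "bufio.Scanner: token too long" is Go's bufio.ErrTooLong, returned when a single line of lastStart.txt exceeds the scanner's default 64 KiB token limit, so the harness could not echo the last start log. A minimal sketch, assuming a plain line-by-line reader (hypothetical helper, not minikube's actual logs.go), of reading such a file with an enlarged scanner buffer:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // illustrative path; the report points at .minikube/logs/lastStart.txt
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default cap is bufio.MaxScanTokenSize (64 KiB); one very long log line
		// makes Scan() stop with bufio.ErrTooLong ("token too long"). Raise the cap
		// before scanning.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}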
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-900414 -n ha-900414
helpers_test.go:261: (dbg) Run:  kubectl --context ha-900414 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (357.49s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 stop -v=7 --alsologtostderr
E0729 17:31:52.903026   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-900414 stop -v=7 --alsologtostderr: exit status 82 (2m0.462473216s)

                                                
                                                
-- stdout --
	* Stopping node "ha-900414-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:30:24.319320   38165 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:30:24.319418   38165 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:30:24.319428   38165 out.go:304] Setting ErrFile to fd 2...
	I0729 17:30:24.319432   38165 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:30:24.319608   38165 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 17:30:24.319815   38165 out.go:298] Setting JSON to false
	I0729 17:30:24.319878   38165 mustload.go:65] Loading cluster: ha-900414
	I0729 17:30:24.320207   38165 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:30:24.320283   38165 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/config.json ...
	I0729 17:30:24.320449   38165 mustload.go:65] Loading cluster: ha-900414
	I0729 17:30:24.320563   38165 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:30:24.320586   38165 stop.go:39] StopHost: ha-900414-m04
	I0729 17:30:24.320889   38165 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:30:24.320923   38165 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:30:24.335314   38165 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43841
	I0729 17:30:24.335746   38165 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:30:24.336340   38165 main.go:141] libmachine: Using API Version  1
	I0729 17:30:24.336366   38165 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:30:24.336684   38165 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:30:24.339108   38165 out.go:177] * Stopping node "ha-900414-m04"  ...
	I0729 17:30:24.340236   38165 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 17:30:24.340263   38165 main.go:141] libmachine: (ha-900414-m04) Calling .DriverName
	I0729 17:30:24.340456   38165 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 17:30:24.340490   38165 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHHostname
	I0729 17:30:24.343384   38165 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:30:24.343855   38165 main.go:141] libmachine: (ha-900414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:eb:e5", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:29:51 +0000 UTC Type:0 Mac:52:54:00:a6:eb:e5 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-900414-m04 Clientid:01:52:54:00:a6:eb:e5}
	I0729 17:30:24.343897   38165 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined IP address 192.168.39.156 and MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:30:24.344027   38165 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHPort
	I0729 17:30:24.344182   38165 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHKeyPath
	I0729 17:30:24.344330   38165 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHUsername
	I0729 17:30:24.344468   38165 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m04/id_rsa Username:docker}
	I0729 17:30:24.428837   38165 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 17:30:24.482320   38165 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 17:30:24.535858   38165 main.go:141] libmachine: Stopping "ha-900414-m04"...
	I0729 17:30:24.535894   38165 main.go:141] libmachine: (ha-900414-m04) Calling .GetState
	I0729 17:30:24.537447   38165 main.go:141] libmachine: (ha-900414-m04) Calling .Stop
	I0729 17:30:24.541411   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 0/120
	I0729 17:30:25.543687   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 1/120
	I0729 17:30:26.544946   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 2/120
	I0729 17:30:27.546494   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 3/120
	I0729 17:30:28.548727   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 4/120
	I0729 17:30:29.550686   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 5/120
	I0729 17:30:30.552872   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 6/120
	I0729 17:30:31.554155   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 7/120
	I0729 17:30:32.556047   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 8/120
	I0729 17:30:33.557438   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 9/120
	I0729 17:30:34.559293   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 10/120
	I0729 17:30:35.560755   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 11/120
	I0729 17:30:36.562438   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 12/120
	I0729 17:30:37.564433   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 13/120
	I0729 17:30:38.565989   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 14/120
	I0729 17:30:39.567670   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 15/120
	I0729 17:30:40.568923   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 16/120
	I0729 17:30:41.570190   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 17/120
	I0729 17:30:42.571724   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 18/120
	I0729 17:30:43.573074   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 19/120
	I0729 17:30:44.574702   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 20/120
	I0729 17:30:45.576725   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 21/120
	I0729 17:30:46.577985   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 22/120
	I0729 17:30:47.579171   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 23/120
	I0729 17:30:48.580464   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 24/120
	I0729 17:30:49.582381   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 25/120
	I0729 17:30:50.584024   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 26/120
	I0729 17:30:51.585385   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 27/120
	I0729 17:30:52.586611   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 28/120
	I0729 17:30:53.588900   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 29/120
	I0729 17:30:54.590715   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 30/120
	I0729 17:30:55.592824   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 31/120
	I0729 17:30:56.594132   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 32/120
	I0729 17:30:57.595376   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 33/120
	I0729 17:30:58.597358   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 34/120
	I0729 17:30:59.599256   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 35/120
	I0729 17:31:00.600507   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 36/120
	I0729 17:31:01.601766   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 37/120
	I0729 17:31:02.603297   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 38/120
	I0729 17:31:03.604672   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 39/120
	I0729 17:31:04.606789   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 40/120
	I0729 17:31:05.608649   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 41/120
	I0729 17:31:06.609818   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 42/120
	I0729 17:31:07.610976   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 43/120
	I0729 17:31:08.612155   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 44/120
	I0729 17:31:09.613917   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 45/120
	I0729 17:31:10.615226   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 46/120
	I0729 17:31:11.616642   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 47/120
	I0729 17:31:12.617976   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 48/120
	I0729 17:31:13.619308   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 49/120
	I0729 17:31:14.620596   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 50/120
	I0729 17:31:15.621931   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 51/120
	I0729 17:31:16.623217   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 52/120
	I0729 17:31:17.624795   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 53/120
	I0729 17:31:18.626234   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 54/120
	I0729 17:31:19.627560   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 55/120
	I0729 17:31:20.628778   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 56/120
	I0729 17:31:21.630161   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 57/120
	I0729 17:31:22.631339   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 58/120
	I0729 17:31:23.632816   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 59/120
	I0729 17:31:24.634967   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 60/120
	I0729 17:31:25.636070   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 61/120
	I0729 17:31:26.637582   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 62/120
	I0729 17:31:27.638857   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 63/120
	I0729 17:31:28.640265   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 64/120
	I0729 17:31:29.642535   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 65/120
	I0729 17:31:30.643832   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 66/120
	I0729 17:31:31.645152   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 67/120
	I0729 17:31:32.646568   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 68/120
	I0729 17:31:33.647892   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 69/120
	I0729 17:31:34.649765   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 70/120
	I0729 17:31:35.651313   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 71/120
	I0729 17:31:36.652744   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 72/120
	I0729 17:31:37.653886   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 73/120
	I0729 17:31:38.655291   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 74/120
	I0729 17:31:39.657454   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 75/120
	I0729 17:31:40.658753   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 76/120
	I0729 17:31:41.660120   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 77/120
	I0729 17:31:42.662061   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 78/120
	I0729 17:31:43.664154   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 79/120
	I0729 17:31:44.665803   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 80/120
	I0729 17:31:45.667074   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 81/120
	I0729 17:31:46.668672   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 82/120
	I0729 17:31:47.670869   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 83/120
	I0729 17:31:48.672613   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 84/120
	I0729 17:31:49.674423   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 85/120
	I0729 17:31:50.675958   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 86/120
	I0729 17:31:51.678304   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 87/120
	I0729 17:31:52.680426   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 88/120
	I0729 17:31:53.681696   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 89/120
	I0729 17:31:54.683914   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 90/120
	I0729 17:31:55.685609   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 91/120
	I0729 17:31:56.686951   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 92/120
	I0729 17:31:57.688945   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 93/120
	I0729 17:31:58.690849   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 94/120
	I0729 17:31:59.692563   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 95/120
	I0729 17:32:00.694343   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 96/120
	I0729 17:32:01.695885   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 97/120
	I0729 17:32:02.697036   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 98/120
	I0729 17:32:03.699338   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 99/120
	I0729 17:32:04.700856   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 100/120
	I0729 17:32:05.702326   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 101/120
	I0729 17:32:06.703566   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 102/120
	I0729 17:32:07.705015   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 103/120
	I0729 17:32:08.706204   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 104/120
	I0729 17:32:09.707909   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 105/120
	I0729 17:32:10.709214   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 106/120
	I0729 17:32:11.710602   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 107/120
	I0729 17:32:12.712755   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 108/120
	I0729 17:32:13.714054   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 109/120
	I0729 17:32:14.716104   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 110/120
	I0729 17:32:15.717423   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 111/120
	I0729 17:32:16.718651   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 112/120
	I0729 17:32:17.720786   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 113/120
	I0729 17:32:18.722619   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 114/120
	I0729 17:32:19.724496   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 115/120
	I0729 17:32:20.726005   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 116/120
	I0729 17:32:21.727262   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 117/120
	I0729 17:32:22.728546   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 118/120
	I0729 17:32:23.730723   38165 main.go:141] libmachine: (ha-900414-m04) Waiting for machine to stop 119/120
	I0729 17:32:24.732036   38165 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 17:32:24.732100   38165 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 17:32:24.734146   38165 out.go:177] 
	W0729 17:32:24.735594   38165 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 17:32:24.735612   38165 out.go:239] * 
	* 
	W0729 17:32:24.738504   38165 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:32:24.739733   38165 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-900414 stop -v=7 --alsologtostderr": exit status 82
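
The exit status 82 (GUEST_STOP_TIMEOUT) above follows the driver's stop loop visible in the stderr dump: roughly 120 one-second polls of the VM state before giving up. A minimal sketch of that poll-until-stopped pattern; the getState/requestStop helpers are hypothetical stand-ins, not the libmachine API:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls a VM's state once per second for up to maxWait attempts,
// mirroring the "Waiting for machine to stop N/120" loop above.
func waitForStop(getState func() string, requestStop func() error, maxWait int) error {
	if err := requestStop(); err != nil {
		return err
	}
	for i := 0; i < maxWait; i++ {
		if getState() == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxWait)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Hypothetical stand-ins: a VM that never reaches "Stopped".
	state := func() string { return "Running" }
	stop := func() error { return nil }
	if err := waitForStop(state, stop, 5); err != nil {
		fmt.Println("stop err:", err)
	}
}
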
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr: exit status 3 (18.897601079s)

                                                
                                                
-- stdout --
	ha-900414
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-900414-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-900414-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:32:24.781798   38593 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:32:24.781894   38593 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:32:24.781900   38593 out.go:304] Setting ErrFile to fd 2...
	I0729 17:32:24.781904   38593 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:32:24.782094   38593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 17:32:24.782243   38593 out.go:298] Setting JSON to false
	I0729 17:32:24.782265   38593 mustload.go:65] Loading cluster: ha-900414
	I0729 17:32:24.782382   38593 notify.go:220] Checking for updates...
	I0729 17:32:24.782637   38593 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:32:24.782653   38593 status.go:255] checking status of ha-900414 ...
	I0729 17:32:24.783015   38593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:32:24.783071   38593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:32:24.803319   38593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44165
	I0729 17:32:24.803722   38593 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:32:24.804239   38593 main.go:141] libmachine: Using API Version  1
	I0729 17:32:24.804260   38593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:32:24.804600   38593 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:32:24.804754   38593 main.go:141] libmachine: (ha-900414) Calling .GetState
	I0729 17:32:24.806164   38593 status.go:330] ha-900414 host status = "Running" (err=<nil>)
	I0729 17:32:24.806178   38593 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:32:24.806456   38593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:32:24.806502   38593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:32:24.820686   38593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33155
	I0729 17:32:24.821042   38593 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:32:24.821536   38593 main.go:141] libmachine: Using API Version  1
	I0729 17:32:24.821563   38593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:32:24.821863   38593 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:32:24.822052   38593 main.go:141] libmachine: (ha-900414) Calling .GetIP
	I0729 17:32:24.825061   38593 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:32:24.825481   38593 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:32:24.825516   38593 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:32:24.825604   38593 host.go:66] Checking if "ha-900414" exists ...
	I0729 17:32:24.825853   38593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:32:24.825884   38593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:32:24.839529   38593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46777
	I0729 17:32:24.839864   38593 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:32:24.840207   38593 main.go:141] libmachine: Using API Version  1
	I0729 17:32:24.840219   38593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:32:24.840516   38593 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:32:24.840720   38593 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:32:24.840903   38593 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:32:24.840936   38593 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:32:24.843293   38593 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:32:24.843696   38593 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:32:24.843733   38593 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:32:24.843847   38593 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:32:24.843991   38593 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:32:24.844156   38593 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:32:24.844264   38593 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:32:24.927086   38593 ssh_runner.go:195] Run: systemctl --version
	I0729 17:32:24.933679   38593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:32:24.949622   38593 kubeconfig.go:125] found "ha-900414" server: "https://192.168.39.254:8443"
	I0729 17:32:24.949647   38593 api_server.go:166] Checking apiserver status ...
	I0729 17:32:24.949682   38593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:32:24.964693   38593 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4956/cgroup
	W0729 17:32:24.973723   38593 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4956/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:32:24.973762   38593 ssh_runner.go:195] Run: ls
	I0729 17:32:24.978195   38593 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:32:24.984375   38593 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:32:24.984400   38593 status.go:422] ha-900414 apiserver status = Running (err=<nil>)
	I0729 17:32:24.984427   38593 status.go:257] ha-900414 status: &{Name:ha-900414 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:32:24.984452   38593 status.go:255] checking status of ha-900414-m02 ...
	I0729 17:32:24.984724   38593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:32:24.984764   38593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:32:25.000282   38593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45891
	I0729 17:32:25.000640   38593 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:32:25.001152   38593 main.go:141] libmachine: Using API Version  1
	I0729 17:32:25.001170   38593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:32:25.001475   38593 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:32:25.001687   38593 main.go:141] libmachine: (ha-900414-m02) Calling .GetState
	I0729 17:32:25.003230   38593 status.go:330] ha-900414-m02 host status = "Running" (err=<nil>)
	I0729 17:32:25.003250   38593 host.go:66] Checking if "ha-900414-m02" exists ...
	I0729 17:32:25.003581   38593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:32:25.003613   38593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:32:25.017777   38593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45447
	I0729 17:32:25.018225   38593 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:32:25.018731   38593 main.go:141] libmachine: Using API Version  1
	I0729 17:32:25.018756   38593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:32:25.019045   38593 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:32:25.019229   38593 main.go:141] libmachine: (ha-900414-m02) Calling .GetIP
	I0729 17:32:25.022010   38593 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:32:25.022406   38593 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:27:56 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:32:25.022430   38593 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:32:25.022577   38593 host.go:66] Checking if "ha-900414-m02" exists ...
	I0729 17:32:25.022848   38593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:32:25.022882   38593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:32:25.036690   38593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39765
	I0729 17:32:25.037097   38593 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:32:25.037495   38593 main.go:141] libmachine: Using API Version  1
	I0729 17:32:25.037517   38593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:32:25.037836   38593 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:32:25.038012   38593 main.go:141] libmachine: (ha-900414-m02) Calling .DriverName
	I0729 17:32:25.038188   38593 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:32:25.038210   38593 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHHostname
	I0729 17:32:25.040398   38593 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:32:25.040771   38593 main.go:141] libmachine: (ha-900414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:84:83", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:27:56 +0000 UTC Type:0 Mac:52:54:00:a0:84:83 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-900414-m02 Clientid:01:52:54:00:a0:84:83}
	I0729 17:32:25.040808   38593 main.go:141] libmachine: (ha-900414-m02) DBG | domain ha-900414-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:a0:84:83 in network mk-ha-900414
	I0729 17:32:25.040917   38593 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHPort
	I0729 17:32:25.041077   38593 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHKeyPath
	I0729 17:32:25.041241   38593 main.go:141] libmachine: (ha-900414-m02) Calling .GetSSHUsername
	I0729 17:32:25.041385   38593 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m02/id_rsa Username:docker}
	I0729 17:32:25.127870   38593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:32:25.146122   38593 kubeconfig.go:125] found "ha-900414" server: "https://192.168.39.254:8443"
	I0729 17:32:25.146146   38593 api_server.go:166] Checking apiserver status ...
	I0729 17:32:25.146179   38593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:32:25.161696   38593 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1429/cgroup
	W0729 17:32:25.171258   38593 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1429/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:32:25.171306   38593 ssh_runner.go:195] Run: ls
	I0729 17:32:25.175666   38593 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:32:25.180188   38593 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:32:25.180208   38593 status.go:422] ha-900414-m02 apiserver status = Running (err=<nil>)
	I0729 17:32:25.180218   38593 status.go:257] ha-900414-m02 status: &{Name:ha-900414-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:32:25.180235   38593 status.go:255] checking status of ha-900414-m04 ...
	I0729 17:32:25.180539   38593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:32:25.180588   38593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:32:25.196510   38593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41055
	I0729 17:32:25.196883   38593 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:32:25.197329   38593 main.go:141] libmachine: Using API Version  1
	I0729 17:32:25.197353   38593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:32:25.197665   38593 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:32:25.197849   38593 main.go:141] libmachine: (ha-900414-m04) Calling .GetState
	I0729 17:32:25.199340   38593 status.go:330] ha-900414-m04 host status = "Running" (err=<nil>)
	I0729 17:32:25.199365   38593 host.go:66] Checking if "ha-900414-m04" exists ...
	I0729 17:32:25.199630   38593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:32:25.199660   38593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:32:25.213627   38593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41739
	I0729 17:32:25.213997   38593 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:32:25.214459   38593 main.go:141] libmachine: Using API Version  1
	I0729 17:32:25.214484   38593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:32:25.214783   38593 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:32:25.214919   38593 main.go:141] libmachine: (ha-900414-m04) Calling .GetIP
	I0729 17:32:25.217390   38593 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:32:25.217753   38593 main.go:141] libmachine: (ha-900414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:eb:e5", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:29:51 +0000 UTC Type:0 Mac:52:54:00:a6:eb:e5 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-900414-m04 Clientid:01:52:54:00:a6:eb:e5}
	I0729 17:32:25.217778   38593 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined IP address 192.168.39.156 and MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:32:25.217896   38593 host.go:66] Checking if "ha-900414-m04" exists ...
	I0729 17:32:25.218157   38593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:32:25.218185   38593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:32:25.232388   38593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46171
	I0729 17:32:25.232753   38593 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:32:25.233159   38593 main.go:141] libmachine: Using API Version  1
	I0729 17:32:25.233181   38593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:32:25.233481   38593 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:32:25.233674   38593 main.go:141] libmachine: (ha-900414-m04) Calling .DriverName
	I0729 17:32:25.233851   38593 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:32:25.233878   38593 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHHostname
	I0729 17:32:25.236227   38593 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:32:25.236605   38593 main.go:141] libmachine: (ha-900414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:eb:e5", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:29:51 +0000 UTC Type:0 Mac:52:54:00:a6:eb:e5 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-900414-m04 Clientid:01:52:54:00:a6:eb:e5}
	I0729 17:32:25.236638   38593 main.go:141] libmachine: (ha-900414-m04) DBG | domain ha-900414-m04 has defined IP address 192.168.39.156 and MAC address 52:54:00:a6:eb:e5 in network mk-ha-900414
	I0729 17:32:25.236741   38593 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHPort
	I0729 17:32:25.236879   38593 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHKeyPath
	I0729 17:32:25.237008   38593 main.go:141] libmachine: (ha-900414-m04) Calling .GetSSHUsername
	I0729 17:32:25.237129   38593 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414-m04/id_rsa Username:docker}
	W0729 17:32:43.638530   38593 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.156:22: connect: no route to host
	W0729 17:32:43.638621   38593 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.156:22: connect: no route to host
	E0729 17:32:43.638633   38593 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.156:22: connect: no route to host
	I0729 17:32:43.638648   38593 status.go:257] ha-900414-m04 status: &{Name:ha-900414-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0729 17:32:43.638664   38593 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.156:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr" : exit status 3
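
The status check that reported "host: Error / kubelet: Nonexistent" for ha-900414-m04 combines an SSH reachability probe (which failed above with "no route to host") with an apiserver /healthz request for the control-plane nodes. A hedged, self-contained sketch of both probes; the addresses and the use of InsecureSkipVerify are illustrative assumptions, not minikube's implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"net/http"
	"time"
)

func main() {
	// SSH reachability probe, similar to the dial that failed above with
	// "dial tcp 192.168.39.156:22: connect: no route to host".
	conn, err := net.DialTimeout("tcp", "192.168.39.156:22", 10*time.Second)
	if err != nil {
		fmt.Println("host unreachable:", err)
	} else {
		conn.Close()
		fmt.Println("ssh port reachable")
	}

	// Control-plane health probe against the HA VIP, as in
	// "Checking apiserver healthz at https://192.168.39.254:8443/healthz".
	// InsecureSkipVerify keeps the sketch self-contained; real checks verify the cluster CA.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unhealthy:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver healthz status:", resp.StatusCode)
}
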
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-900414 -n ha-900414
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-900414 logs -n 25: (1.672936529s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-900414 ssh -n ha-900414-m02 sudo cat                                          | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | /home/docker/cp-test_ha-900414-m03_ha-900414-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-900414 cp ha-900414-m03:/home/docker/cp-test.txt                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04:/home/docker/cp-test_ha-900414-m03_ha-900414-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n ha-900414-m04 sudo cat                                          | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | /home/docker/cp-test_ha-900414-m03_ha-900414-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-900414 cp testdata/cp-test.txt                                                | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-900414 cp ha-900414-m04:/home/docker/cp-test.txt                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3654370545/001/cp-test_ha-900414-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-900414 cp ha-900414-m04:/home/docker/cp-test.txt                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414:/home/docker/cp-test_ha-900414-m04_ha-900414.txt                       |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n ha-900414 sudo cat                                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | /home/docker/cp-test_ha-900414-m04_ha-900414.txt                                 |           |         |         |                     |                     |
	| cp      | ha-900414 cp ha-900414-m04:/home/docker/cp-test.txt                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m02:/home/docker/cp-test_ha-900414-m04_ha-900414-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n ha-900414-m02 sudo cat                                          | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | /home/docker/cp-test_ha-900414-m04_ha-900414-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-900414 cp ha-900414-m04:/home/docker/cp-test.txt                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m03:/home/docker/cp-test_ha-900414-m04_ha-900414-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n                                                                 | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | ha-900414-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-900414 ssh -n ha-900414-m03 sudo cat                                          | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC | 29 Jul 24 17:20 UTC |
	|         | /home/docker/cp-test_ha-900414-m04_ha-900414-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-900414 node stop m02 -v=7                                                     | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-900414 node start m02 -v=7                                                    | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:23 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-900414 -v=7                                                           | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-900414 -v=7                                                                | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-900414 --wait=true -v=7                                                    | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:26 UTC | 29 Jul 24 17:30 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-900414                                                                | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:30 UTC |                     |
	| node    | ha-900414 node delete m03 -v=7                                                   | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:30 UTC | 29 Jul 24 17:30 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-900414 stop -v=7                                                              | ha-900414 | jenkins | v1.33.1 | 29 Jul 24 17:30 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 17:26:11
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 17:26:11.306314   36531 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:26:11.306438   36531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:26:11.306448   36531 out.go:304] Setting ErrFile to fd 2...
	I0729 17:26:11.306455   36531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:26:11.306623   36531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 17:26:11.307186   36531 out.go:298] Setting JSON to false
	I0729 17:26:11.308015   36531 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4123,"bootTime":1722269848,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 17:26:11.308078   36531 start.go:139] virtualization: kvm guest
	I0729 17:26:11.310432   36531 out.go:177] * [ha-900414] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 17:26:11.311933   36531 notify.go:220] Checking for updates...
	I0729 17:26:11.311948   36531 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 17:26:11.313147   36531 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:26:11.314334   36531 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 17:26:11.315592   36531 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 17:26:11.316841   36531 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 17:26:11.318012   36531 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:26:11.319746   36531 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:26:11.319837   36531 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:26:11.320276   36531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:26:11.320310   36531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:26:11.335861   36531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42545
	I0729 17:26:11.336257   36531 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:26:11.336740   36531 main.go:141] libmachine: Using API Version  1
	I0729 17:26:11.336763   36531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:26:11.337062   36531 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:26:11.337265   36531 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:26:11.371036   36531 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 17:26:11.372289   36531 start.go:297] selected driver: kvm2
	I0729 17:26:11.372300   36531 start.go:901] validating driver "kvm2" against &{Name:ha-900414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-900414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.156 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:26:11.372435   36531 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:26:11.372801   36531 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:26:11.372870   36531 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19345-11206/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 17:26:11.386447   36531 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 17:26:11.387089   36531 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:26:11.387148   36531 cni.go:84] Creating CNI manager for ""
	I0729 17:26:11.387162   36531 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 17:26:11.387211   36531 start.go:340] cluster config:
	{Name:ha-900414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-900414 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.156 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:26:11.387312   36531 iso.go:125] acquiring lock: {Name:mke302f851ce8256f9b44dd080ed38df68285cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:26:11.388877   36531 out.go:177] * Starting "ha-900414" primary control-plane node in "ha-900414" cluster
	I0729 17:26:11.390002   36531 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:26:11.390032   36531 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 17:26:11.390039   36531 cache.go:56] Caching tarball of preloaded images
	I0729 17:26:11.390122   36531 preload.go:172] Found /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 17:26:11.390134   36531 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 17:26:11.390244   36531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/config.json ...
	I0729 17:26:11.390492   36531 start.go:360] acquireMachinesLock for ha-900414: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:26:11.390536   36531 start.go:364] duration metric: took 24.749µs to acquireMachinesLock for "ha-900414"
	I0729 17:26:11.390549   36531 start.go:96] Skipping create...Using existing machine configuration
	I0729 17:26:11.390556   36531 fix.go:54] fixHost starting: 
	I0729 17:26:11.390806   36531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:26:11.390834   36531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:26:11.404583   36531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44933
	I0729 17:26:11.404984   36531 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:26:11.405480   36531 main.go:141] libmachine: Using API Version  1
	I0729 17:26:11.405500   36531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:26:11.405803   36531 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:26:11.405962   36531 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:26:11.406118   36531 main.go:141] libmachine: (ha-900414) Calling .GetState
	I0729 17:26:11.407521   36531 fix.go:112] recreateIfNeeded on ha-900414: state=Running err=<nil>
	W0729 17:26:11.407561   36531 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 17:26:11.409223   36531 out.go:177] * Updating the running kvm2 "ha-900414" VM ...
	I0729 17:26:11.410314   36531 machine.go:94] provisionDockerMachine start ...
	I0729 17:26:11.410330   36531 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:26:11.410540   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:26:11.413424   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:11.413930   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:26:11.413964   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:11.414168   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:26:11.414325   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:26:11.414479   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:26:11.414638   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:26:11.414787   36531 main.go:141] libmachine: Using SSH client type: native
	I0729 17:26:11.415015   36531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0729 17:26:11.415031   36531 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 17:26:11.523581   36531 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-900414
	
	I0729 17:26:11.523609   36531 main.go:141] libmachine: (ha-900414) Calling .GetMachineName
	I0729 17:26:11.523807   36531 buildroot.go:166] provisioning hostname "ha-900414"
	I0729 17:26:11.523827   36531 main.go:141] libmachine: (ha-900414) Calling .GetMachineName
	I0729 17:26:11.524056   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:26:11.526693   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:11.527104   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:26:11.527130   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:11.527259   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:26:11.527493   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:26:11.527635   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:26:11.527804   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:26:11.527993   36531 main.go:141] libmachine: Using SSH client type: native
	I0729 17:26:11.528179   36531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0729 17:26:11.528199   36531 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-900414 && echo "ha-900414" | sudo tee /etc/hostname
	I0729 17:26:11.652853   36531 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-900414
	
	I0729 17:26:11.652889   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:26:11.655382   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:11.655722   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:26:11.655762   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:11.655943   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:26:11.656144   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:26:11.656282   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:26:11.656429   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:26:11.656605   36531 main.go:141] libmachine: Using SSH client type: native
	I0729 17:26:11.656767   36531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0729 17:26:11.656783   36531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-900414' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-900414/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-900414' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 17:26:11.763650   36531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
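	Note: the script above is how the provisioner pins the guest's hostname in /etc/hosts, rewriting an existing 127.0.1.1 entry instead of appending a duplicate. A minimal sketch of building that command string, with a hypothetical helper name (this is not minikube's API):
	
	package main
	
	import "fmt"
	
	// hostsFixupCmd returns the shell snippet that makes hostname resolve via
	// 127.0.1.1, mirroring the SSH command shown above.
	func hostsFixupCmd(hostname string) string {
	    return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
	  else
	    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
	  fi
	fi`, hostname)
	}
	
	func main() {
	    fmt.Println(hostsFixupCmd("ha-900414"))
	}
	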
	I0729 17:26:11.763673   36531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 17:26:11.763696   36531 buildroot.go:174] setting up certificates
	I0729 17:26:11.763707   36531 provision.go:84] configureAuth start
	I0729 17:26:11.763719   36531 main.go:141] libmachine: (ha-900414) Calling .GetMachineName
	I0729 17:26:11.763949   36531 main.go:141] libmachine: (ha-900414) Calling .GetIP
	I0729 17:26:11.766441   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:11.766847   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:26:11.766875   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:11.767013   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:26:11.769103   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:11.769432   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:26:11.769457   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:11.769548   36531 provision.go:143] copyHostCerts
	I0729 17:26:11.769580   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 17:26:11.769620   36531 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 17:26:11.769630   36531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 17:26:11.769700   36531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 17:26:11.769797   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 17:26:11.769818   36531 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 17:26:11.769828   36531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 17:26:11.769858   36531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 17:26:11.769927   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 17:26:11.769946   36531 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 17:26:11.769953   36531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 17:26:11.769978   36531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 17:26:11.770048   36531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.ha-900414 san=[127.0.0.1 192.168.39.114 ha-900414 localhost minikube]
	I0729 17:26:11.896044   36531 provision.go:177] copyRemoteCerts
	I0729 17:26:11.896116   36531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 17:26:11.896137   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:26:11.898609   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:11.899074   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:26:11.899103   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:11.899307   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:26:11.899509   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:26:11.899654   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:26:11.899774   36531 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:26:11.984842   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 17:26:11.984905   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 17:26:12.015181   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 17:26:12.015250   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0729 17:26:12.043085   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 17:26:12.043135   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 17:26:12.070247   36531 provision.go:87] duration metric: took 306.52942ms to configureAuth
	I0729 17:26:12.070271   36531 buildroot.go:189] setting minikube options for container-runtime
	I0729 17:26:12.070505   36531 config.go:182] Loaded profile config "ha-900414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:26:12.070578   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:26:12.073032   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:12.073435   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:26:12.073463   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:26:12.073652   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:26:12.073817   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:26:12.073991   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:26:12.074122   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:26:12.074290   36531 main.go:141] libmachine: Using SSH client type: native
	I0729 17:26:12.074477   36531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0729 17:26:12.074492   36531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 17:27:42.966377   36531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 17:27:42.966404   36531 machine.go:97] duration metric: took 1m31.556076614s to provisionDockerMachine
	I0729 17:27:42.966439   36531 start.go:293] postStartSetup for "ha-900414" (driver="kvm2")
	I0729 17:27:42.966454   36531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 17:27:42.966481   36531 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:27:42.966744   36531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 17:27:42.966770   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:27:42.969405   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:42.969782   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:27:42.969806   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:42.969913   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:27:42.970111   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:27:42.970279   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:27:42.970429   36531 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:27:43.053530   36531 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 17:27:43.057880   36531 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 17:27:43.057902   36531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 17:27:43.057975   36531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 17:27:43.058048   36531 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 17:27:43.058058   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> /etc/ssl/certs/183932.pem
	I0729 17:27:43.058172   36531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 17:27:43.067925   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 17:27:43.090884   36531 start.go:296] duration metric: took 124.432235ms for postStartSetup
	I0729 17:27:43.090973   36531 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:27:43.091266   36531 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0729 17:27:43.091293   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:27:43.093776   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:43.094125   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:27:43.094158   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:43.094338   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:27:43.094510   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:27:43.094646   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:27:43.094822   36531 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	W0729 17:27:43.176156   36531 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0729 17:27:43.176177   36531 fix.go:56] duration metric: took 1m31.785620665s for fixHost
	I0729 17:27:43.176196   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:27:43.178680   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:43.178989   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:27:43.179008   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:43.179195   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:27:43.179348   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:27:43.179496   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:27:43.179607   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:27:43.179715   36531 main.go:141] libmachine: Using SSH client type: native
	I0729 17:27:43.179874   36531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0729 17:27:43.179891   36531 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 17:27:43.287111   36531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722274063.242453168
	
	I0729 17:27:43.287128   36531 fix.go:216] guest clock: 1722274063.242453168
	I0729 17:27:43.287141   36531 fix.go:229] Guest: 2024-07-29 17:27:43.242453168 +0000 UTC Remote: 2024-07-29 17:27:43.176184223 +0000 UTC m=+91.904326746 (delta=66.268945ms)
	I0729 17:27:43.287165   36531 fix.go:200] guest clock delta is within tolerance: 66.268945ms
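	Note: the guest-clock check above parses the output of "date +%s.%N" on the VM and compares it with the host's clock; the 66ms delta here is within tolerance, so no resync is forced. An illustrative sketch of that comparison (assumed output format; not minikube's code):
	
	package main
	
	import (
	    "fmt"
	    "strconv"
	    "strings"
	    "time"
	)
	
	// parseGuestClock turns "1722274063.242453168" (seconds.nanoseconds) into a time.Time.
	func parseGuestClock(s string) (time.Time, error) {
	    parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	    sec, err := strconv.ParseInt(parts[0], 10, 64)
	    if err != nil {
	        return time.Time{}, err
	    }
	    var nsec int64
	    if len(parts) == 2 {
	        if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
	            return time.Time{}, err
	        }
	    }
	    return time.Unix(sec, nsec), nil
	}
	
	func main() {
	    guest, _ := parseGuestClock("1722274063.242453168")
	    fmt.Println("guest/host delta:", time.Since(guest).Round(time.Millisecond))
	}
	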
	I0729 17:27:43.287171   36531 start.go:83] releasing machines lock for "ha-900414", held for 1m31.896626884s
	I0729 17:27:43.287189   36531 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:27:43.287432   36531 main.go:141] libmachine: (ha-900414) Calling .GetIP
	I0729 17:27:43.289659   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:43.290002   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:27:43.290026   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:43.290169   36531 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:27:43.290662   36531 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:27:43.290821   36531 main.go:141] libmachine: (ha-900414) Calling .DriverName
	I0729 17:27:43.290926   36531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 17:27:43.290970   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:27:43.290983   36531 ssh_runner.go:195] Run: cat /version.json
	I0729 17:27:43.291002   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHHostname
	I0729 17:27:43.293399   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:43.293667   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:43.293744   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:27:43.293767   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:43.293903   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:27:43.294031   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:27:43.294048   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:27:43.294058   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:43.294214   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:27:43.294231   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHPort
	I0729 17:27:43.294425   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHKeyPath
	I0729 17:27:43.294418   36531 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:27:43.294574   36531 main.go:141] libmachine: (ha-900414) Calling .GetSSHUsername
	I0729 17:27:43.294791   36531 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/ha-900414/id_rsa Username:docker}
	I0729 17:27:43.370956   36531 ssh_runner.go:195] Run: systemctl --version
	I0729 17:27:43.394450   36531 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 17:27:43.556346   36531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 17:27:43.562410   36531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 17:27:43.562474   36531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 17:27:43.571883   36531 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 17:27:43.571908   36531 start.go:495] detecting cgroup driver to use...
	I0729 17:27:43.571998   36531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 17:27:43.588205   36531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 17:27:43.601659   36531 docker.go:217] disabling cri-docker service (if available) ...
	I0729 17:27:43.601704   36531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 17:27:43.615321   36531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 17:27:43.628123   36531 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 17:27:43.783170   36531 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 17:27:43.934601   36531 docker.go:233] disabling docker service ...
	I0729 17:27:43.934662   36531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 17:27:43.952907   36531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 17:27:43.966008   36531 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 17:27:44.106465   36531 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 17:27:44.251272   36531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 17:27:44.267168   36531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 17:27:44.286018   36531 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 17:27:44.286072   36531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:27:44.296260   36531 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 17:27:44.296315   36531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:27:44.306232   36531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:27:44.315844   36531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:27:44.325970   36531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 17:27:44.336146   36531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:27:44.346682   36531 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:27:44.358201   36531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:27:44.368533   36531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 17:27:44.377631   36531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 17:27:44.386655   36531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:27:44.530166   36531 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 17:27:44.822545   36531 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 17:27:44.822609   36531 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 17:27:44.827763   36531 start.go:563] Will wait 60s for crictl version
	I0729 17:27:44.827813   36531 ssh_runner.go:195] Run: which crictl
	I0729 17:27:44.831550   36531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 17:27:44.870913   36531 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 17:27:44.870994   36531 ssh_runner.go:195] Run: crio --version
	I0729 17:27:44.899236   36531 ssh_runner.go:195] Run: crio --version
	I0729 17:27:44.929663   36531 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 17:27:44.930956   36531 main.go:141] libmachine: (ha-900414) Calling .GetIP
	I0729 17:27:44.933324   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:44.933719   36531 main.go:141] libmachine: (ha-900414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:29:8d", ip: ""} in network mk-ha-900414: {Iface:virbr1 ExpiryTime:2024-07-29 18:16:14 +0000 UTC Type:0 Mac:52:54:00:5a:29:8d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-900414 Clientid:01:52:54:00:5a:29:8d}
	I0729 17:27:44.933740   36531 main.go:141] libmachine: (ha-900414) DBG | domain ha-900414 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:29:8d in network mk-ha-900414
	I0729 17:27:44.933940   36531 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 17:27:44.938459   36531 kubeadm.go:883] updating cluster {Name:ha-900414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-900414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.156 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 17:27:44.938581   36531 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:27:44.938623   36531 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 17:27:44.982323   36531 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 17:27:44.982343   36531 crio.go:433] Images already preloaded, skipping extraction
	I0729 17:27:44.982405   36531 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 17:27:45.017312   36531 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 17:27:45.017331   36531 cache_images.go:84] Images are preloaded, skipping loading
	I0729 17:27:45.017339   36531 kubeadm.go:934] updating node { 192.168.39.114 8443 v1.30.3 crio true true} ...
	I0729 17:27:45.017429   36531 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-900414 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-900414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 17:27:45.017492   36531 ssh_runner.go:195] Run: crio config
	I0729 17:27:45.075046   36531 cni.go:84] Creating CNI manager for ""
	I0729 17:27:45.075062   36531 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 17:27:45.075071   36531 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 17:27:45.075090   36531 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.114 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-900414 NodeName:ha-900414 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 17:27:45.075210   36531 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-900414"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
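	
	Note: controlPlaneEndpoint above is control-plane.minikube.internal:8443; later in this run that name is checked against /etc/hosts, where it maps to the kube-vip HA VIP 192.168.39.254, so kubeadm and kubelet always talk to whichever control plane currently holds the VIP. A hypothetical helper that ensures such a hosts entry exists (illustrative only, not minikube's implementation):
	
	package main
	
	import (
	    "fmt"
	    "os"
	    "strings"
	)
	
	// ensureHostsEntry appends "ip<TAB>name" to path unless it is already present.
	func ensureHostsEntry(path, ip, name string) error {
	    entry := fmt.Sprintf("%s\t%s", ip, name)
	    data, err := os.ReadFile(path)
	    if err != nil && !os.IsNotExist(err) {
	        return err
	    }
	    if strings.Contains(string(data), entry) {
	        return nil // already mapped
	    }
	    f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
	    if err != nil {
	        return err
	    }
	    defer f.Close()
	    _, err = fmt.Fprintln(f, entry)
	    return err
	}
	
	func main() {
	    if err := ensureHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
	        fmt.Fprintln(os.Stderr, err)
	        os.Exit(1)
	    }
	}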
	
	I0729 17:27:45.075230   36531 kube-vip.go:115] generating kube-vip config ...
	I0729 17:27:45.075264   36531 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 17:27:45.086751   36531 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 17:27:45.086841   36531 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
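	
	Note: the manifest above is dropped into /etc/kubernetes/manifests as a static pod on each control-plane node; kube-vip announces the HA VIP 192.168.39.254 on eth0 via ARP, elects a holder through the plndr-cp-lock lease, and with lb_enable forwards port 8443 to the apiservers. An illustrative Go sketch of rendering such a manifest with the VIP parameterized (simplified; not minikube's generator):
	
	package main
	
	import (
	    "os"
	    "text/template"
	)
	
	const vipManifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - {name: vip_interface, value: eth0}
    - {name: cp_enable, value: "true"}
    - {name: vip_leaderelection, value: "true"}
    - {name: address, value: "{{ .VIP }}"}
`
	
	func main() {
	    t := template.Must(template.New("kube-vip").Parse(vipManifest))
	    _ = t.Execute(os.Stdout, struct{ VIP string }{VIP: "192.168.39.254"})
	}
	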
	I0729 17:27:45.086897   36531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 17:27:45.096373   36531 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 17:27:45.096432   36531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 17:27:45.105323   36531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 17:27:45.121284   36531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 17:27:45.137139   36531 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 17:27:45.153053   36531 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 17:27:45.169479   36531 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 17:27:45.173779   36531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:27:45.335745   36531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:27:45.350140   36531 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414 for IP: 192.168.39.114
	I0729 17:27:45.350161   36531 certs.go:194] generating shared ca certs ...
	I0729 17:27:45.350174   36531 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:27:45.350299   36531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 17:27:45.350336   36531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 17:27:45.350345   36531 certs.go:256] generating profile certs ...
	I0729 17:27:45.350465   36531 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/client.key
	I0729 17:27:45.350500   36531 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.6e6af526
	I0729 17:27:45.350517   36531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.6e6af526 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114 192.168.39.111 192.168.39.6 192.168.39.254]
	I0729 17:27:45.444770   36531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.6e6af526 ...
	I0729 17:27:45.444798   36531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.6e6af526: {Name:mkace41337feac31813a88006368e7446bce771b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:27:45.444983   36531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.6e6af526 ...
	I0729 17:27:45.444996   36531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.6e6af526: {Name:mkd9b548f7c68524ec9eba7eee5f22bf0d008e46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:27:45.445092   36531 certs.go:381] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt.6e6af526 -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt
	I0729 17:27:45.445252   36531 certs.go:385] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key.6e6af526 -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key
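	Note: the regenerated apiserver serving certificate above carries SANs for every control-plane IP (192.168.39.114, .111, .6), the in-cluster service IP 10.96.0.1, and the HA VIP 192.168.39.254, so TLS verification succeeds whether a client reaches a node directly or through the VIP. A minimal self-signed sketch of issuing a certificate with IP and DNS SANs (illustrative only; minikube signs with its cluster CA):
	
	package main
	
	import (
	    "crypto/ecdsa"
	    "crypto/elliptic"
	    "crypto/rand"
	    "crypto/x509"
	    "crypto/x509/pkix"
	    "encoding/pem"
	    "math/big"
	    "net"
	    "os"
	    "time"
	)
	
	func main() {
	    key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	    tmpl := &x509.Certificate{
	        SerialNumber: big.NewInt(1),
	        Subject:      pkix.Name{CommonName: "minikube"},
	        NotBefore:    time.Now(),
	        NotAfter:     time.Now().AddDate(3, 0, 0),
	        DNSNames:     []string{"ha-900414", "localhost", "minikube"},
	        IPAddresses: []net.IP{
	            net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
	            net.ParseIP("192.168.39.114"), net.ParseIP("192.168.39.254"),
	        },
	        KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	        ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    }
	    der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
	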
	I0729 17:27:45.445393   36531 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key
	I0729 17:27:45.445408   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 17:27:45.445424   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 17:27:45.445440   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 17:27:45.445456   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 17:27:45.445472   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 17:27:45.445486   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 17:27:45.445504   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 17:27:45.445521   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 17:27:45.445580   36531 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 17:27:45.445627   36531 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 17:27:45.445637   36531 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 17:27:45.445671   36531 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 17:27:45.445710   36531 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 17:27:45.445739   36531 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 17:27:45.445780   36531 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 17:27:45.445809   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem -> /usr/share/ca-certificates/18393.pem
	I0729 17:27:45.445833   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> /usr/share/ca-certificates/183932.pem
	I0729 17:27:45.445848   36531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:27:45.446485   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 17:27:45.472397   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 17:27:45.495890   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 17:27:45.519653   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 17:27:45.542761   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 17:27:45.565458   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 17:27:45.588115   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 17:27:45.611695   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/ha-900414/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 17:27:45.634924   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 17:27:45.657552   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 17:27:45.680733   36531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 17:27:45.704067   36531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 17:27:45.720799   36531 ssh_runner.go:195] Run: openssl version
	I0729 17:27:45.726636   36531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 17:27:45.737314   36531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 17:27:45.741720   36531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 17:27:45.741761   36531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 17:27:45.747354   36531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 17:27:45.756749   36531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 17:27:45.767582   36531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 17:27:45.771948   36531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 17:27:45.771995   36531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 17:27:45.777893   36531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 17:27:45.788034   36531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 17:27:45.798900   36531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:27:45.803502   36531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:27:45.803549   36531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:27:45.809298   36531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
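
	The commands above take each CA certificate's OpenSSL subject hash ("openssl x509 -hash -noout") and link it into /etc/ssl/certs as <hash>.0, which is how the system trust store locates it. Below is a minimal illustrative Go sketch of that hash-and-symlink step; it is not minikube's own implementation, and the certificate path is just a placeholder for a CA already copied onto the node.

	// Illustrative sketch only (not minikube's code): reproduce the
	// hash-and-symlink step shown in the log above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkCACert(certPath string) error {
		// "openssl x509 -hash -noout" prints the subject-name hash that
		// OpenSSL expects as the symlink name in /etc/ssl/certs.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// Equivalent to: test -L <link> || ln -fs <cert> <link>
		if _, err := os.Lstat(link); err == nil {
			return nil // symlink already present
		}
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
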
	I0729 17:27:45.818811   36531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 17:27:45.823291   36531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 17:27:45.828935   36531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 17:27:45.834533   36531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 17:27:45.839895   36531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 17:27:45.845210   36531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 17:27:45.850553   36531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
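
	Each "openssl x509 -checkend 86400" call above succeeds only if the certificate remains valid for at least another 86400 seconds (24 hours). The following is a minimal sketch of the same check using Go's crypto/x509, shown purely for illustration; the certificate path is a placeholder and this is not the code path minikube actually uses.

	// Illustrative sketch only: the 24-hour expiry check that
	// "openssl x509 -checkend 86400" performs, done with crypto/x509.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func validFor(certPath string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(certPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", certPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// Valid if the certificate does not expire within the next d.
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("valid for the next 24h:", ok)
	}
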
	I0729 17:27:45.855890   36531 kubeadm.go:392] StartCluster: {Name:ha-900414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-900414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.156 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:27:45.855999   36531 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 17:27:45.856053   36531 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 17:27:45.893498   36531 cri.go:89] found id: "6e723c54d812a5be7eb854af81bf345c7429c93d2037e5c4ff4c3a74a418fae1"
	I0729 17:27:45.893522   36531 cri.go:89] found id: "e461901feb4562fc8c70f08a274772296164ae97b7fe43bb6573f9fe43ed0503"
	I0729 17:27:45.893526   36531 cri.go:89] found id: "e4d3e21e2fdd017f698d1d9d2ba208122c495a9c6273542bd16759ffc40e16a1"
	I0729 17:27:45.893529   36531 cri.go:89] found id: "7d7ffaf9ef2fda3e8c5965888c0244dd20c8cdc30b4ed1c300c5f9de3a70a127"
	I0729 17:27:45.893532   36531 cri.go:89] found id: "911569fe2373d5193385d0fdcc98071bacd23c7de020ed4e2ab3a15a3793c2d2"
	I0729 17:27:45.893534   36531 cri.go:89] found id: "b419192dc8add024f08c798a5f50d7c6bd2ee0ae8a2280771508aebc78e20217"
	I0729 17:27:45.893537   36531 cri.go:89] found id: "10b182b72bc50740d9cc2e0ed8b5c1d4b8f58c58594cc462fc796a75ccce7d38"
	I0729 17:27:45.893539   36531 cri.go:89] found id: "37ef29620e9c9670549fa7741de5956157c7a03728d417b46b44a7b1abbf2ce9"
	I0729 17:27:45.893542   36531 cri.go:89] found id: "426b48b0fdbff14ce36fc0396074186cbd51533c984e6fac5f3f963bce611059"
	I0729 17:27:45.893546   36531 cri.go:89] found id: "a7721018288f905547c9c059b6453a96e4c74f3573058e88425444162b255edf"
	I0729 17:27:45.893551   36531 cri.go:89] found id: "2a27f5a54bd43275313e419dabaa643ad1764f5cd10953333df1eea8a9a4bf1b"
	I0729 17:27:45.893554   36531 cri.go:89] found id: "270db6978c4e4bce98a1f424ce50f66507840c818ab639d9ef02e8f96bab41d6"
	I0729 17:27:45.893556   36531 cri.go:89] found id: "dd71b5556931becb81321807072e3a8100ce3344e4dea3237c6918a6c8e98cc5"
	I0729 17:27:45.893558   36531 cri.go:89] found id: ""
	I0729 17:27:45.893614   36531 ssh_runner.go:195] Run: sudo runc list -f json
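
	The container IDs listed above come from "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system". A small illustrative sketch (not minikube's cri package) that runs the same query and collects the IDs:

	// Illustrative sketch only: list kube-system container IDs the same
	// way the log above does, by shelling out to crictl with a label filter.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func kubeSystemContainerIDs() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := kubeSystemContainerIDs()
		if err != nil {
			fmt.Println("listing containers:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}
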
	
	
	==> CRI-O <==
	Jul 29 17:32:44 ha-900414 crio[3755]: time="2024-07-29 17:32:44.252607677Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722274364252582643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0147aa62-ded0-4d10-a555-8281a610c4ce name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:32:44 ha-900414 crio[3755]: time="2024-07-29 17:32:44.253547420Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d4ef5cc1-06aa-4cdd-a160-12baea12d499 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:32:44 ha-900414 crio[3755]: time="2024-07-29 17:32:44.253624344Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4ef5cc1-06aa-4cdd-a160-12baea12d499 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:32:44 ha-900414 crio[3755]: time="2024-07-29 17:32:44.254088509Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c5a0872090b8a9407f0e240df38b013e8c3038b5e47a8999d2dcfe1a6a847b2,PodSandboxId:c007a83285af87cc3a8f7decb0251012dde13811d97b57d08206a31427d5c3a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722274110760146881,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50fa96e8-1ee5-4e09-a734-802dbcd02bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 1a126ce7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:824354c7c16e9e8af404495001dc5d595a7cbc026c918156cf68ed850d7c19e8,PodSandboxId:cef075745bd3e13fa4553a22e0e8749baa40e9fecf2b65e77c60ddf312c1ed5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722274109764609958,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188869688c2292cb440067d4b4cfa9f3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237f4f9a22aab5537900193f82504f315f0a18522b4ec147a18810a7207e9d03,PodSandboxId:0104a45e7598e30bdbc9acd393e17b4589234866ffc4e4c6f0deb5f2b179f696,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722274108761219032,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc461575e1c166c1aa8b00d38af205a,},Annotations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6e4cdaa36f1850f6ade39480251a5593c0369b0410bccb53957d0d832f43e7,PodSandboxId:d12022815679b303c38ea086e2a76f39ec3d47f7ae40b748cf15ab6bb1fe964e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722274105028099226,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4fv4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6,},Annotations:map[string]string{io.kubernetes.container.hash: bbf9a5b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97477efe48afec6ed188f1d10e74857abdbc4a945712c05e01ae55c1f48fb38d,PodSandboxId:58018117a81ff794019dc4556482350f7196d6ed56994875b706a1aa9e5d434d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722274086072630131,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e529ef1ae527634e8684df95d99942df,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67fcbfdd88d8ba74c5d1c8a633f19a95728f6d2649e20c14873f44a9a83cb5fb,PodSandboxId:c007a83285af87cc3a8f7decb0251012dde13811d97b57d08206a31427d5c3a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722274072296491519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50fa96e8-1ee5-4e09-a734-802dbcd02bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 1a126ce7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74b6fc801cbb9e29df38fb778b6af9db3ab8950818cd18c03383e749fc4190a,PodSandboxId:d8ce74c22f4e2c9339fc50e8b032b204d4c56cce3332942283790ff88fa71d3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722274071856460173,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tng4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2303269f-50d3-4a63-aa76-891f001e6f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e285077a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:478caa49a9e2e5f11ac1dfc8c6e870c29f3045b994c601e6cd646952b9c0de2f,PodSandboxId:e37d55989869ca3baab53df8ebf4b721d60ede534281c48e3ff8c5b14c8d46b5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722274071924188035,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z9cvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2177daa-4efb-478c-845f-f30e77e91684,},Annotations:map[string]string{io.kubernetes.container.hash: 7870c1dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1a269a3
6cb51b460d86ff16520ffadbbd810606203c8a998252609f5a40452b,PodSandboxId:a863620823cc108fa12a70ef138b83fca0cf1c2f8e92863ae0a3b769dbf738d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722274071976348453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-48j6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 14f903c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96142230a5d1570c505a5127e3d4b0025e6c120c808eec6b1579291d9de14bb9,PodSandboxId:cef075745bd3e13fa4553a22e0e8749baa40e9fecf2b65e77c60ddf312c1ed5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722274071742708802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188869688c2292cb440067d4b4cfa9f3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96d199d063086572e53c3056fe3c36873a53cd3fe0bc1d0c796366f2c85d8b47,PodSandboxId:48bcddc3f018c08cbdf46edb4557396d8b25dfe6016f192534afa0c5a51328a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722274071709164105,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c283b6b662036e086a0948631d339c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ec5252a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e76e9ddc1762b284dde16c55310d2e5005380c7b7522aa4c92d164afc32b292,PodSandboxId:528c72454cdd017233d33e8fd2f875f1ca0d26df629c50c50451e523df5851c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722274071702640533,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9r87x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc4709f-f07b-4694-a352-aedd9c67bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: a73c1fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b08de2e88b41b61b1403daba370b669abfd1acee1793945733079da7004a6e,PodSandboxId:f69bbf70d4ce08eb7b420abc1362fb6f668ff031bb536ef59f5248de912ee3fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722274071580225499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba63e9544003a3
2c61ae2cfa7bb117,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d511eeca56c8ff8ccbb88e762a12ef7258f1c2175101320dda0553e82887c297,PodSandboxId:0104a45e7598e30bdbc9acd393e17b4589234866ffc4e4c6f0deb5f2b179f696,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722274071565576136,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc461575e1c166c1aa8b00d38af205a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:174e5d31268c70a798e1fa1fe5d2845d98eaed228a11b55810b7ca4680256a8e,PodSandboxId:7d2a64a5bcccdbfe3d1db48fd0a6231c01ec2f72f5944f5aa82835bdbbf8641b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722273570293260741,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4fv4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6,},Annot
ations:map[string]string{io.kubernetes.container.hash: bbf9a5b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7ffaf9ef2fda3e8c5965888c0244dd20c8cdc30b4ed1c300c5f9de3a70a127,PodSandboxId:7a0bb58ad2b90a00cbfe5381a420068caf367d6d0a46d8bfa235680d9a9e383c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722273433299033423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9r87x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc4709f-f07b-4694-a352-aedd9c67bbb2,},Annotations:map[string]string{io.kube
rnetes.container.hash: a73c1fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911569fe2373d5193385d0fdcc98071bacd23c7de020ed4e2ab3a15a3793c2d2,PodSandboxId:f47facc78da61a96cbc7f88d068ff1130bdf82703fa98c5e773eba93b8000852,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722273433248658511,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-48j6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 14f903c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b182b72bc50740d9cc2e0ed8b5c1d4b8f58c58594cc462fc796a75ccce7d38,PodSandboxId:30715fa1b9f024468de573f3e60b03860bdea65df505677b107723e5e7663d18,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722273421363258619,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z9cvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2177daa-4efb-478c-845f-f30e77e91684,},Annotations:map[string]string{io.kubernetes.container.hash: 7870c1dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ef29620e9c9670549fa7741de5956157c7a03728d417b46b44a7b1abbf2ce9,PodSandboxId:250f31f0996e1b89f155a50b796cf5c3e03e4e621f62973dc2ca1b4547440256,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722273417715030224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tng4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2303269f-50d3-4a63-aa76-891f001e6f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e285077a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7721018288f905547c9c059b6453a96e4c74f3573058e88425444162b255edf,PodSandboxId:e2a054b42822ad7d37df60a69fdb759eb309b8ee40e4c712e2f7ae6a2aaa0e6c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722273397639748438,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba63e9544003a32c61ae2cfa7bb117,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a27f5a54bd43275313e419dabaa643ad1764f5cd10953333df1eea8a9a4bf1b,PodSandboxId:46030b1ba43cfae01b3b4a26ba23e19c1dade394973241fddfe9126def4aa597,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722273397620067180,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c283b6b662036e086a0948631d339c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ec5252a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d4ef5cc1-06aa-4cdd-a160-12baea12d499 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:32:44 ha-900414 crio[3755]: time="2024-07-29 17:32:44.298044466Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a9d0e4eb-6c2b-4fe5-aaa3-153b43fb099d name=/runtime.v1.RuntimeService/Version
	Jul 29 17:32:44 ha-900414 crio[3755]: time="2024-07-29 17:32:44.298134351Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a9d0e4eb-6c2b-4fe5-aaa3-153b43fb099d name=/runtime.v1.RuntimeService/Version
	Jul 29 17:32:44 ha-900414 crio[3755]: time="2024-07-29 17:32:44.299860280Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d4c106b1-e1d5-484d-b6e7-7728776c386a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:32:44 ha-900414 crio[3755]: time="2024-07-29 17:32:44.300525282Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722274364300503050,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d4c106b1-e1d5-484d-b6e7-7728776c386a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:32:44 ha-900414 crio[3755]: time="2024-07-29 17:32:44.301192457Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f5862e03-9765-419f-b6d2-4256e42959a4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:32:44 ha-900414 crio[3755]: time="2024-07-29 17:32:44.301262817Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f5862e03-9765-419f-b6d2-4256e42959a4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:32:44 ha-900414 crio[3755]: time="2024-07-29 17:32:44.301696253Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c5a0872090b8a9407f0e240df38b013e8c3038b5e47a8999d2dcfe1a6a847b2,PodSandboxId:c007a83285af87cc3a8f7decb0251012dde13811d97b57d08206a31427d5c3a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722274110760146881,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50fa96e8-1ee5-4e09-a734-802dbcd02bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 1a126ce7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:824354c7c16e9e8af404495001dc5d595a7cbc026c918156cf68ed850d7c19e8,PodSandboxId:cef075745bd3e13fa4553a22e0e8749baa40e9fecf2b65e77c60ddf312c1ed5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722274109764609958,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188869688c2292cb440067d4b4cfa9f3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237f4f9a22aab5537900193f82504f315f0a18522b4ec147a18810a7207e9d03,PodSandboxId:0104a45e7598e30bdbc9acd393e17b4589234866ffc4e4c6f0deb5f2b179f696,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722274108761219032,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc461575e1c166c1aa8b00d38af205a,},Annotations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6e4cdaa36f1850f6ade39480251a5593c0369b0410bccb53957d0d832f43e7,PodSandboxId:d12022815679b303c38ea086e2a76f39ec3d47f7ae40b748cf15ab6bb1fe964e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722274105028099226,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4fv4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6,},Annotations:map[string]string{io.kubernetes.container.hash: bbf9a5b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97477efe48afec6ed188f1d10e74857abdbc4a945712c05e01ae55c1f48fb38d,PodSandboxId:58018117a81ff794019dc4556482350f7196d6ed56994875b706a1aa9e5d434d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722274086072630131,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e529ef1ae527634e8684df95d99942df,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67fcbfdd88d8ba74c5d1c8a633f19a95728f6d2649e20c14873f44a9a83cb5fb,PodSandboxId:c007a83285af87cc3a8f7decb0251012dde13811d97b57d08206a31427d5c3a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722274072296491519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50fa96e8-1ee5-4e09-a734-802dbcd02bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 1a126ce7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74b6fc801cbb9e29df38fb778b6af9db3ab8950818cd18c03383e749fc4190a,PodSandboxId:d8ce74c22f4e2c9339fc50e8b032b204d4c56cce3332942283790ff88fa71d3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722274071856460173,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tng4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2303269f-50d3-4a63-aa76-891f001e6f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e285077a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:478caa49a9e2e5f11ac1dfc8c6e870c29f3045b994c601e6cd646952b9c0de2f,PodSandboxId:e37d55989869ca3baab53df8ebf4b721d60ede534281c48e3ff8c5b14c8d46b5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722274071924188035,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z9cvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2177daa-4efb-478c-845f-f30e77e91684,},Annotations:map[string]string{io.kubernetes.container.hash: 7870c1dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1a269a3
6cb51b460d86ff16520ffadbbd810606203c8a998252609f5a40452b,PodSandboxId:a863620823cc108fa12a70ef138b83fca0cf1c2f8e92863ae0a3b769dbf738d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722274071976348453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-48j6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 14f903c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96142230a5d1570c505a5127e3d4b0025e6c120c808eec6b1579291d9de14bb9,PodSandboxId:cef075745bd3e13fa4553a22e0e8749baa40e9fecf2b65e77c60ddf312c1ed5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722274071742708802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188869688c2292cb440067d4b4cfa9f3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96d199d063086572e53c3056fe3c36873a53cd3fe0bc1d0c796366f2c85d8b47,PodSandboxId:48bcddc3f018c08cbdf46edb4557396d8b25dfe6016f192534afa0c5a51328a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722274071709164105,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c283b6b662036e086a0948631d339c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ec5252a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e76e9ddc1762b284dde16c55310d2e5005380c7b7522aa4c92d164afc32b292,PodSandboxId:528c72454cdd017233d33e8fd2f875f1ca0d26df629c50c50451e523df5851c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722274071702640533,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9r87x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc4709f-f07b-4694-a352-aedd9c67bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: a73c1fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b08de2e88b41b61b1403daba370b669abfd1acee1793945733079da7004a6e,PodSandboxId:f69bbf70d4ce08eb7b420abc1362fb6f668ff031bb536ef59f5248de912ee3fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722274071580225499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba63e9544003a3
2c61ae2cfa7bb117,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d511eeca56c8ff8ccbb88e762a12ef7258f1c2175101320dda0553e82887c297,PodSandboxId:0104a45e7598e30bdbc9acd393e17b4589234866ffc4e4c6f0deb5f2b179f696,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722274071565576136,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc461575e1c166c1aa8b00d38af205a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:174e5d31268c70a798e1fa1fe5d2845d98eaed228a11b55810b7ca4680256a8e,PodSandboxId:7d2a64a5bcccdbfe3d1db48fd0a6231c01ec2f72f5944f5aa82835bdbbf8641b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722273570293260741,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4fv4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6,},Annot
ations:map[string]string{io.kubernetes.container.hash: bbf9a5b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7ffaf9ef2fda3e8c5965888c0244dd20c8cdc30b4ed1c300c5f9de3a70a127,PodSandboxId:7a0bb58ad2b90a00cbfe5381a420068caf367d6d0a46d8bfa235680d9a9e383c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722273433299033423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9r87x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc4709f-f07b-4694-a352-aedd9c67bbb2,},Annotations:map[string]string{io.kube
rnetes.container.hash: a73c1fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911569fe2373d5193385d0fdcc98071bacd23c7de020ed4e2ab3a15a3793c2d2,PodSandboxId:f47facc78da61a96cbc7f88d068ff1130bdf82703fa98c5e773eba93b8000852,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722273433248658511,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-48j6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 14f903c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b182b72bc50740d9cc2e0ed8b5c1d4b8f58c58594cc462fc796a75ccce7d38,PodSandboxId:30715fa1b9f024468de573f3e60b03860bdea65df505677b107723e5e7663d18,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722273421363258619,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z9cvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2177daa-4efb-478c-845f-f30e77e91684,},Annotations:map[string]string{io.kubernetes.container.hash: 7870c1dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ef29620e9c9670549fa7741de5956157c7a03728d417b46b44a7b1abbf2ce9,PodSandboxId:250f31f0996e1b89f155a50b796cf5c3e03e4e621f62973dc2ca1b4547440256,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722273417715030224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tng4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2303269f-50d3-4a63-aa76-891f001e6f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e285077a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7721018288f905547c9c059b6453a96e4c74f3573058e88425444162b255edf,PodSandboxId:e2a054b42822ad7d37df60a69fdb759eb309b8ee40e4c712e2f7ae6a2aaa0e6c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722273397639748438,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba63e9544003a32c61ae2cfa7bb117,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a27f5a54bd43275313e419dabaa643ad1764f5cd10953333df1eea8a9a4bf1b,PodSandboxId:46030b1ba43cfae01b3b4a26ba23e19c1dade394973241fddfe9126def4aa597,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722273397620067180,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c283b6b662036e086a0948631d339c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ec5252a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f5862e03-9765-419f-b6d2-4256e42959a4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:32:44 ha-900414 crio[3755]: time="2024-07-29 17:32:44.347144610Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=efdb5481-0cdd-404d-bb9e-53c9595e9a3f name=/runtime.v1.RuntimeService/Version
	Jul 29 17:32:44 ha-900414 crio[3755]: time="2024-07-29 17:32:44.347236682Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=efdb5481-0cdd-404d-bb9e-53c9595e9a3f name=/runtime.v1.RuntimeService/Version
	Jul 29 17:32:44 ha-900414 crio[3755]: time="2024-07-29 17:32:44.350566016Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8e610e7f-e54c-42ff-85ff-510d95564bc9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:32:44 ha-900414 crio[3755]: time="2024-07-29 17:32:44.354867850Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722274364354840417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8e610e7f-e54c-42ff-85ff-510d95564bc9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:32:44 ha-900414 crio[3755]: time="2024-07-29 17:32:44.359246438Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7409f79c-e1f4-42a3-97cc-ca0d75192264 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:32:44 ha-900414 crio[3755]: time="2024-07-29 17:32:44.359539092Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7409f79c-e1f4-42a3-97cc-ca0d75192264 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:32:44 ha-900414 crio[3755]: time="2024-07-29 17:32:44.360442295Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c5a0872090b8a9407f0e240df38b013e8c3038b5e47a8999d2dcfe1a6a847b2,PodSandboxId:c007a83285af87cc3a8f7decb0251012dde13811d97b57d08206a31427d5c3a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722274110760146881,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50fa96e8-1ee5-4e09-a734-802dbcd02bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 1a126ce7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:824354c7c16e9e8af404495001dc5d595a7cbc026c918156cf68ed850d7c19e8,PodSandboxId:cef075745bd3e13fa4553a22e0e8749baa40e9fecf2b65e77c60ddf312c1ed5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722274109764609958,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188869688c2292cb440067d4b4cfa9f3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237f4f9a22aab5537900193f82504f315f0a18522b4ec147a18810a7207e9d03,PodSandboxId:0104a45e7598e30bdbc9acd393e17b4589234866ffc4e4c6f0deb5f2b179f696,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722274108761219032,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc461575e1c166c1aa8b00d38af205a,},Annotations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6e4cdaa36f1850f6ade39480251a5593c0369b0410bccb53957d0d832f43e7,PodSandboxId:d12022815679b303c38ea086e2a76f39ec3d47f7ae40b748cf15ab6bb1fe964e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722274105028099226,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4fv4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6,},Annotations:map[string]string{io.kubernetes.container.hash: bbf9a5b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97477efe48afec6ed188f1d10e74857abdbc4a945712c05e01ae55c1f48fb38d,PodSandboxId:58018117a81ff794019dc4556482350f7196d6ed56994875b706a1aa9e5d434d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722274086072630131,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e529ef1ae527634e8684df95d99942df,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67fcbfdd88d8ba74c5d1c8a633f19a95728f6d2649e20c14873f44a9a83cb5fb,PodSandboxId:c007a83285af87cc3a8f7decb0251012dde13811d97b57d08206a31427d5c3a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722274072296491519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50fa96e8-1ee5-4e09-a734-802dbcd02bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 1a126ce7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74b6fc801cbb9e29df38fb778b6af9db3ab8950818cd18c03383e749fc4190a,PodSandboxId:d8ce74c22f4e2c9339fc50e8b032b204d4c56cce3332942283790ff88fa71d3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722274071856460173,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tng4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2303269f-50d3-4a63-aa76-891f001e6f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e285077a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:478caa49a9e2e5f11ac1dfc8c6e870c29f3045b994c601e6cd646952b9c0de2f,PodSandboxId:e37d55989869ca3baab53df8ebf4b721d60ede534281c48e3ff8c5b14c8d46b5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722274071924188035,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z9cvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2177daa-4efb-478c-845f-f30e77e91684,},Annotations:map[string]string{io.kubernetes.container.hash: 7870c1dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1a269a3
6cb51b460d86ff16520ffadbbd810606203c8a998252609f5a40452b,PodSandboxId:a863620823cc108fa12a70ef138b83fca0cf1c2f8e92863ae0a3b769dbf738d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722274071976348453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-48j6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 14f903c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96142230a5d1570c505a5127e3d4b0025e6c120c808eec6b1579291d9de14bb9,PodSandboxId:cef075745bd3e13fa4553a22e0e8749baa40e9fecf2b65e77c60ddf312c1ed5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722274071742708802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188869688c2292cb440067d4b4cfa9f3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96d199d063086572e53c3056fe3c36873a53cd3fe0bc1d0c796366f2c85d8b47,PodSandboxId:48bcddc3f018c08cbdf46edb4557396d8b25dfe6016f192534afa0c5a51328a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722274071709164105,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c283b6b662036e086a0948631d339c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ec5252a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e76e9ddc1762b284dde16c55310d2e5005380c7b7522aa4c92d164afc32b292,PodSandboxId:528c72454cdd017233d33e8fd2f875f1ca0d26df629c50c50451e523df5851c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722274071702640533,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9r87x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc4709f-f07b-4694-a352-aedd9c67bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: a73c1fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b08de2e88b41b61b1403daba370b669abfd1acee1793945733079da7004a6e,PodSandboxId:f69bbf70d4ce08eb7b420abc1362fb6f668ff031bb536ef59f5248de912ee3fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722274071580225499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba63e9544003a3
2c61ae2cfa7bb117,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d511eeca56c8ff8ccbb88e762a12ef7258f1c2175101320dda0553e82887c297,PodSandboxId:0104a45e7598e30bdbc9acd393e17b4589234866ffc4e4c6f0deb5f2b179f696,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722274071565576136,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc461575e1c166c1aa8b00d38af205a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:174e5d31268c70a798e1fa1fe5d2845d98eaed228a11b55810b7ca4680256a8e,PodSandboxId:7d2a64a5bcccdbfe3d1db48fd0a6231c01ec2f72f5944f5aa82835bdbbf8641b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722273570293260741,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4fv4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6,},Annot
ations:map[string]string{io.kubernetes.container.hash: bbf9a5b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7ffaf9ef2fda3e8c5965888c0244dd20c8cdc30b4ed1c300c5f9de3a70a127,PodSandboxId:7a0bb58ad2b90a00cbfe5381a420068caf367d6d0a46d8bfa235680d9a9e383c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722273433299033423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9r87x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc4709f-f07b-4694-a352-aedd9c67bbb2,},Annotations:map[string]string{io.kube
rnetes.container.hash: a73c1fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911569fe2373d5193385d0fdcc98071bacd23c7de020ed4e2ab3a15a3793c2d2,PodSandboxId:f47facc78da61a96cbc7f88d068ff1130bdf82703fa98c5e773eba93b8000852,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722273433248658511,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-48j6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 14f903c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b182b72bc50740d9cc2e0ed8b5c1d4b8f58c58594cc462fc796a75ccce7d38,PodSandboxId:30715fa1b9f024468de573f3e60b03860bdea65df505677b107723e5e7663d18,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722273421363258619,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z9cvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2177daa-4efb-478c-845f-f30e77e91684,},Annotations:map[string]string{io.kubernetes.container.hash: 7870c1dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ef29620e9c9670549fa7741de5956157c7a03728d417b46b44a7b1abbf2ce9,PodSandboxId:250f31f0996e1b89f155a50b796cf5c3e03e4e621f62973dc2ca1b4547440256,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722273417715030224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tng4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2303269f-50d3-4a63-aa76-891f001e6f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e285077a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7721018288f905547c9c059b6453a96e4c74f3573058e88425444162b255edf,PodSandboxId:e2a054b42822ad7d37df60a69fdb759eb309b8ee40e4c712e2f7ae6a2aaa0e6c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722273397639748438,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba63e9544003a32c61ae2cfa7bb117,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a27f5a54bd43275313e419dabaa643ad1764f5cd10953333df1eea8a9a4bf1b,PodSandboxId:46030b1ba43cfae01b3b4a26ba23e19c1dade394973241fddfe9126def4aa597,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722273397620067180,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c283b6b662036e086a0948631d339c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ec5252a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7409f79c-e1f4-42a3-97cc-ca0d75192264 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:32:44 ha-900414 crio[3755]: time="2024-07-29 17:32:44.401285799Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f4b570cd-698c-4d28-af6c-5ec52c7f04d6 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:32:44 ha-900414 crio[3755]: time="2024-07-29 17:32:44.401370940Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f4b570cd-698c-4d28-af6c-5ec52c7f04d6 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:32:44 ha-900414 crio[3755]: time="2024-07-29 17:32:44.402582379Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d3096de-a31f-4672-809d-bfd4b4899b4a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:32:44 ha-900414 crio[3755]: time="2024-07-29 17:32:44.403273808Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722274364403248582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d3096de-a31f-4672-809d-bfd4b4899b4a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:32:44 ha-900414 crio[3755]: time="2024-07-29 17:32:44.403826322Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aaedc1ea-fc84-40e7-930f-c87c92a1c290 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:32:44 ha-900414 crio[3755]: time="2024-07-29 17:32:44.403903844Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aaedc1ea-fc84-40e7-930f-c87c92a1c290 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:32:44 ha-900414 crio[3755]: time="2024-07-29 17:32:44.404412629Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c5a0872090b8a9407f0e240df38b013e8c3038b5e47a8999d2dcfe1a6a847b2,PodSandboxId:c007a83285af87cc3a8f7decb0251012dde13811d97b57d08206a31427d5c3a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722274110760146881,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50fa96e8-1ee5-4e09-a734-802dbcd02bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 1a126ce7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:824354c7c16e9e8af404495001dc5d595a7cbc026c918156cf68ed850d7c19e8,PodSandboxId:cef075745bd3e13fa4553a22e0e8749baa40e9fecf2b65e77c60ddf312c1ed5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722274109764609958,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188869688c2292cb440067d4b4cfa9f3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237f4f9a22aab5537900193f82504f315f0a18522b4ec147a18810a7207e9d03,PodSandboxId:0104a45e7598e30bdbc9acd393e17b4589234866ffc4e4c6f0deb5f2b179f696,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722274108761219032,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc461575e1c166c1aa8b00d38af205a,},Annotations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6e4cdaa36f1850f6ade39480251a5593c0369b0410bccb53957d0d832f43e7,PodSandboxId:d12022815679b303c38ea086e2a76f39ec3d47f7ae40b748cf15ab6bb1fe964e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722274105028099226,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4fv4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6,},Annotations:map[string]string{io.kubernetes.container.hash: bbf9a5b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97477efe48afec6ed188f1d10e74857abdbc4a945712c05e01ae55c1f48fb38d,PodSandboxId:58018117a81ff794019dc4556482350f7196d6ed56994875b706a1aa9e5d434d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722274086072630131,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e529ef1ae527634e8684df95d99942df,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67fcbfdd88d8ba74c5d1c8a633f19a95728f6d2649e20c14873f44a9a83cb5fb,PodSandboxId:c007a83285af87cc3a8f7decb0251012dde13811d97b57d08206a31427d5c3a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722274072296491519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50fa96e8-1ee5-4e09-a734-802dbcd02bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 1a126ce7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74b6fc801cbb9e29df38fb778b6af9db3ab8950818cd18c03383e749fc4190a,PodSandboxId:d8ce74c22f4e2c9339fc50e8b032b204d4c56cce3332942283790ff88fa71d3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722274071856460173,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tng4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2303269f-50d3-4a63-aa76-891f001e6f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e285077a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:478caa49a9e2e5f11ac1dfc8c6e870c29f3045b994c601e6cd646952b9c0de2f,PodSandboxId:e37d55989869ca3baab53df8ebf4b721d60ede534281c48e3ff8c5b14c8d46b5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722274071924188035,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z9cvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2177daa-4efb-478c-845f-f30e77e91684,},Annotations:map[string]string{io.kubernetes.container.hash: 7870c1dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1a269a3
6cb51b460d86ff16520ffadbbd810606203c8a998252609f5a40452b,PodSandboxId:a863620823cc108fa12a70ef138b83fca0cf1c2f8e92863ae0a3b769dbf738d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722274071976348453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-48j6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 14f903c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96142230a5d1570c505a5127e3d4b0025e6c120c808eec6b1579291d9de14bb9,PodSandboxId:cef075745bd3e13fa4553a22e0e8749baa40e9fecf2b65e77c60ddf312c1ed5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722274071742708802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188869688c2292cb440067d4b4cfa9f3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96d199d063086572e53c3056fe3c36873a53cd3fe0bc1d0c796366f2c85d8b47,PodSandboxId:48bcddc3f018c08cbdf46edb4557396d8b25dfe6016f192534afa0c5a51328a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722274071709164105,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c283b6b662036e086a0948631d339c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ec5252a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e76e9ddc1762b284dde16c55310d2e5005380c7b7522aa4c92d164afc32b292,PodSandboxId:528c72454cdd017233d33e8fd2f875f1ca0d26df629c50c50451e523df5851c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722274071702640533,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9r87x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc4709f-f07b-4694-a352-aedd9c67bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: a73c1fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b08de2e88b41b61b1403daba370b669abfd1acee1793945733079da7004a6e,PodSandboxId:f69bbf70d4ce08eb7b420abc1362fb6f668ff031bb536ef59f5248de912ee3fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722274071580225499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba63e9544003a3
2c61ae2cfa7bb117,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d511eeca56c8ff8ccbb88e762a12ef7258f1c2175101320dda0553e82887c297,PodSandboxId:0104a45e7598e30bdbc9acd393e17b4589234866ffc4e4c6f0deb5f2b179f696,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722274071565576136,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc461575e1c166c1aa8b00d38af205a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:174e5d31268c70a798e1fa1fe5d2845d98eaed228a11b55810b7ca4680256a8e,PodSandboxId:7d2a64a5bcccdbfe3d1db48fd0a6231c01ec2f72f5944f5aa82835bdbbf8641b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722273570293260741,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4fv4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc9aae4c-f622-4f0a-bdbc-66295d9c3dd6,},Annot
ations:map[string]string{io.kubernetes.container.hash: bbf9a5b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7ffaf9ef2fda3e8c5965888c0244dd20c8cdc30b4ed1c300c5f9de3a70a127,PodSandboxId:7a0bb58ad2b90a00cbfe5381a420068caf367d6d0a46d8bfa235680d9a9e383c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722273433299033423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9r87x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc4709f-f07b-4694-a352-aedd9c67bbb2,},Annotations:map[string]string{io.kube
rnetes.container.hash: a73c1fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911569fe2373d5193385d0fdcc98071bacd23c7de020ed4e2ab3a15a3793c2d2,PodSandboxId:f47facc78da61a96cbc7f88d068ff1130bdf82703fa98c5e773eba93b8000852,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722273433248658511,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-48j6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 306fc091-c2cf-47d4-86a7-dbe1b2fbfa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 14f903c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b182b72bc50740d9cc2e0ed8b5c1d4b8f58c58594cc462fc796a75ccce7d38,PodSandboxId:30715fa1b9f024468de573f3e60b03860bdea65df505677b107723e5e7663d18,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722273421363258619,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z9cvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2177daa-4efb-478c-845f-f30e77e91684,},Annotations:map[string]string{io.kubernetes.container.hash: 7870c1dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ef29620e9c9670549fa7741de5956157c7a03728d417b46b44a7b1abbf2ce9,PodSandboxId:250f31f0996e1b89f155a50b796cf5c3e03e4e621f62973dc2ca1b4547440256,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722273417715030224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tng4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2303269f-50d3-4a63-aa76-891f001e6f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e285077a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7721018288f905547c9c059b6453a96e4c74f3573058e88425444162b255edf,PodSandboxId:e2a054b42822ad7d37df60a69fdb759eb309b8ee40e4c712e2f7ae6a2aaa0e6c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722273397639748438,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba63e9544003a32c61ae2cfa7bb117,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a27f5a54bd43275313e419dabaa643ad1764f5cd10953333df1eea8a9a4bf1b,PodSandboxId:46030b1ba43cfae01b3b4a26ba23e19c1dade394973241fddfe9126def4aa597,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722273397620067180,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-900414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c283b6b662036e086a0948631d339c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ec5252a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aaedc1ea-fc84-40e7-930f-c87c92a1c290 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4c5a0872090b8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       3                   c007a83285af8       storage-provisioner
	824354c7c16e9       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   2                   cef075745bd3e       kube-controller-manager-ha-900414
	237f4f9a22aab       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            3                   0104a45e7598e       kube-apiserver-ha-900414
	4b6e4cdaa36f1       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   d12022815679b       busybox-fc5497c4f-4fv4t
	97477efe48afe       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   58018117a81ff       kube-vip-ha-900414
	67fcbfdd88d8b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       2                   c007a83285af8       storage-provisioner
	a1a269a36cb51       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   a863620823cc1       coredns-7db6d8ff4d-48j6w
	478caa49a9e2e       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   e37d55989869c       kindnet-z9cvz
	f74b6fc801cbb       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   d8ce74c22f4e2       kube-proxy-tng4t
	96142230a5d15       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Exited              kube-controller-manager   1                   cef075745bd3e       kube-controller-manager-ha-900414
	96d199d063086       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   48bcddc3f018c       etcd-ha-900414
	5e76e9ddc1762       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   528c72454cdd0       coredns-7db6d8ff4d-9r87x
	c4b08de2e88b4       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   f69bbf70d4ce0       kube-scheduler-ha-900414
	d511eeca56c8f       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Exited              kube-apiserver            2                   0104a45e7598e       kube-apiserver-ha-900414
	174e5d31268c7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   7d2a64a5bcccd       busybox-fc5497c4f-4fv4t
	7d7ffaf9ef2fd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   7a0bb58ad2b90       coredns-7db6d8ff4d-9r87x
	911569fe2373d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   f47facc78da61       coredns-7db6d8ff4d-48j6w
	10b182b72bc50       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    15 minutes ago      Exited              kindnet-cni               0                   30715fa1b9f02       kindnet-z9cvz
	37ef29620e9c9       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      15 minutes ago      Exited              kube-proxy                0                   250f31f0996e1       kube-proxy-tng4t
	a7721018288f9       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      16 minutes ago      Exited              kube-scheduler            0                   e2a054b42822a       kube-scheduler-ha-900414
	2a27f5a54bd43       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   46030b1ba43cf       etcd-ha-900414
	
	
	==> coredns [5e76e9ddc1762b284dde16c55310d2e5005380c7b7522aa4c92d164afc32b292] <==
	[INFO] plugin/kubernetes: Trace[920478214]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 17:27:53.778) (total time: 10001ms):
	Trace[920478214]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:28:03.779)
	Trace[920478214]: [10.001716395s] [10.001716395s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:41066->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:41066->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:37802->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:37802->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37800->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1146101281]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 17:28:06.495) (total time: 10266ms):
	Trace[1146101281]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37800->10.96.0.1:443: read: connection reset by peer 10266ms (17:28:16.761)
	Trace[1146101281]: [10.266321597s] [10.266321597s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37800->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [7d7ffaf9ef2fda3e8c5965888c0244dd20c8cdc30b4ed1c300c5f9de3a70a127] <==
	[INFO] 10.244.1.2:35645 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001855497s
	[INFO] 10.244.1.2:43192 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000177303s
	[INFO] 10.244.1.2:33281 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000160789s
	[INFO] 10.244.1.2:57013 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097416s
	[INFO] 10.244.2.2:38166 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136029s
	[INFO] 10.244.2.2:33640 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001913014s
	[INFO] 10.244.2.2:47485 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104905s
	[INFO] 10.244.2.2:45778 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170534s
	[INFO] 10.244.2.2:59234 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076101s
	[INFO] 10.244.0.4:50535 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065536s
	[INFO] 10.244.1.2:58622 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133396s
	[INFO] 10.244.1.2:33438 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102338s
	[INFO] 10.244.2.2:45926 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000383812s
	[INFO] 10.244.2.2:56980 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000187545s
	[INFO] 10.244.2.2:43137 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00016801s
	[INFO] 10.244.0.4:57612 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000159389s
	[INFO] 10.244.1.2:58047 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014126s
	[INFO] 10.244.1.2:45045 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000123813s
	[INFO] 10.244.2.2:35311 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000173973s
	[INFO] 10.244.2.2:47044 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000140928s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [911569fe2373d5193385d0fdcc98071bacd23c7de020ed4e2ab3a15a3793c2d2] <==
	[INFO] 10.244.0.4:43001 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000165887s
	[INFO] 10.244.1.2:43677 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128129s
	[INFO] 10.244.1.2:39513 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001354968s
	[INFO] 10.244.1.2:52828 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000183362s
	[INFO] 10.244.2.2:51403 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116578s
	[INFO] 10.244.2.2:47706 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001162998s
	[INFO] 10.244.2.2:39349 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000083497s
	[INFO] 10.244.0.4:43643 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164666s
	[INFO] 10.244.0.4:51941 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067553s
	[INFO] 10.244.0.4:33186 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117492s
	[INFO] 10.244.1.2:36002 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170421s
	[INFO] 10.244.1.2:41186 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135424s
	[INFO] 10.244.2.2:40469 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00015464s
	[INFO] 10.244.0.4:58750 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131521s
	[INFO] 10.244.0.4:59782 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141641s
	[INFO] 10.244.0.4:47289 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000189592s
	[INFO] 10.244.1.2:44743 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000121922s
	[INFO] 10.244.1.2:60901 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099491s
	[INFO] 10.244.2.2:53612 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000143831s
	[INFO] 10.244.2.2:35693 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000120049s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a1a269a36cb51b460d86ff16520ffadbbd810606203c8a998252609f5a40452b] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1545848923]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 17:28:00.396) (total time: 10001ms):
	Trace[1545848923]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:28:10.397)
	Trace[1545848923]: [10.001306039s] [10.001306039s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:52238->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:52238->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-900414
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-900414
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=ha-900414
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T17_16_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:16:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-900414
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:32:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:28:31 +0000   Mon, 29 Jul 2024 17:16:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:28:31 +0000   Mon, 29 Jul 2024 17:16:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:28:31 +0000   Mon, 29 Jul 2024 17:16:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:28:31 +0000   Mon, 29 Jul 2024 17:17:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.114
	  Hostname:    ha-900414
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d0301ef966ab4d039cde4e4959e83ea6
	  System UUID:                d0301ef9-66ab-4d03-9cde-4e4959e83ea6
	  Boot ID:                    ea7d1983-2f49-4874-b67f-d8eea13c27d6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4fv4t              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-48j6w             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-9r87x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-900414                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-z9cvz                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-900414             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-900414    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-tng4t                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-900414             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-900414                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 15m                  kube-proxy       
	  Normal   Starting                 4m11s                kube-proxy       
	  Normal   NodeHasNoDiskPressure    16m                  kubelet          Node ha-900414 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m                  kubelet          Node ha-900414 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m                  kubelet          Node ha-900414 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m                  node-controller  Node ha-900414 event: Registered Node ha-900414 in Controller
	  Normal   NodeReady                15m                  kubelet          Node ha-900414 status is now: NodeReady
	  Normal   RegisteredNode           14m                  node-controller  Node ha-900414 event: Registered Node ha-900414 in Controller
	  Normal   RegisteredNode           13m                  node-controller  Node ha-900414 event: Registered Node ha-900414 in Controller
	  Warning  ContainerGCFailed        5m1s (x2 over 6m1s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m2s                 node-controller  Node ha-900414 event: Registered Node ha-900414 in Controller
	  Normal   RegisteredNode           4m1s                 node-controller  Node ha-900414 event: Registered Node ha-900414 in Controller
	  Normal   RegisteredNode           2m58s                node-controller  Node ha-900414 event: Registered Node ha-900414 in Controller
	
	
	Name:               ha-900414-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-900414-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=ha-900414
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T17_17_55_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:17:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-900414-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:32:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:28:47 +0000   Mon, 29 Jul 2024 17:28:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:28:47 +0000   Mon, 29 Jul 2024 17:28:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:28:47 +0000   Mon, 29 Jul 2024 17:28:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:28:47 +0000   Mon, 29 Jul 2024 17:28:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.111
	  Hostname:    ha-900414-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 854b5d80a28944e1a0d7e90a65ef964f
	  System UUID:                854b5d80-a289-44e1-a0d7-e90a65ef964f
	  Boot ID:                    4c2eb12b-96bb-4615-a8c3-460c54019d18
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dqz55                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-900414-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-kdzhk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-900414-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-900414-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-bgq99                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-900414-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-900414-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m9s                   kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                    node-controller  Node ha-900414-m02 event: Registered Node ha-900414-m02 in Controller
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-900414-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-900414-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-900414-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                    node-controller  Node ha-900414-m02 event: Registered Node ha-900414-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-900414-m02 event: Registered Node ha-900414-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-900414-m02 status is now: NodeNotReady
	  Normal  Starting                 4m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m35s (x8 over 4m35s)  kubelet          Node ha-900414-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m35s (x8 over 4m35s)  kubelet          Node ha-900414-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m35s (x7 over 4m35s)  kubelet          Node ha-900414-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                   node-controller  Node ha-900414-m02 event: Registered Node ha-900414-m02 in Controller
	  Normal  RegisteredNode           4m1s                   node-controller  Node ha-900414-m02 event: Registered Node ha-900414-m02 in Controller
	  Normal  RegisteredNode           2m58s                  node-controller  Node ha-900414-m02 event: Registered Node ha-900414-m02 in Controller
	
	
	Name:               ha-900414-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-900414-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=ha-900414
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T17_20_07_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:20:06 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-900414-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:30:17 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 17:29:57 +0000   Mon, 29 Jul 2024 17:30:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 17:29:57 +0000   Mon, 29 Jul 2024 17:30:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 17:29:57 +0000   Mon, 29 Jul 2024 17:30:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 17:29:57 +0000   Mon, 29 Jul 2024 17:30:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.156
	  Hostname:    ha-900414-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 82b534ad740b47cbae65e1e5acf41d9a
	  System UUID:                82b534ad-740b-47cb-ae65-e1e5acf41d9a
	  Boot ID:                    9322d5b7-493e-4e2b-ba3e-df1b1dd665c0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9pvc5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-4fsvj              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-hf5lx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x3 over 12m)      kubelet          Node ha-900414-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x3 over 12m)      kubelet          Node ha-900414-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x3 over 12m)      kubelet          Node ha-900414-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-900414-m04 event: Registered Node ha-900414-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-900414-m04 event: Registered Node ha-900414-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-900414-m04 event: Registered Node ha-900414-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-900414-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m2s                   node-controller  Node ha-900414-m04 event: Registered Node ha-900414-m04 in Controller
	  Normal   RegisteredNode           4m1s                   node-controller  Node ha-900414-m04 event: Registered Node ha-900414-m04 in Controller
	  Normal   RegisteredNode           2m58s                  node-controller  Node ha-900414-m04 event: Registered Node ha-900414-m04 in Controller
	  Normal   NodeHasSufficientMemory  2m47s (x2 over 2m47s)  kubelet          Node ha-900414-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m47s (x2 over 2m47s)  kubelet          Node ha-900414-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x2 over 2m47s)  kubelet          Node ha-900414-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m47s                  kubelet          Node ha-900414-m04 has been rebooted, boot id: 9322d5b7-493e-4e2b-ba3e-df1b1dd665c0
	  Normal   NodeReady                2m47s                  kubelet          Node ha-900414-m04 status is now: NodeReady
	  Normal   NodeNotReady             107s (x2 over 3m22s)   node-controller  Node ha-900414-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.778112] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.061146] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062274] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.167660] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.152034] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.281641] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.241432] systemd-fstab-generator[770]: Ignoring "noauto" option for root device
	[  +5.177057] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[  +0.055961] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.040447] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	[  +0.087819] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.082632] kauditd_printk_skb: 21 callbacks suppressed
	[Jul29 17:17] kauditd_printk_skb: 38 callbacks suppressed
	[ +45.217420] kauditd_printk_skb: 26 callbacks suppressed
	[Jul29 17:27] systemd-fstab-generator[3668]: Ignoring "noauto" option for root device
	[  +0.158652] systemd-fstab-generator[3680]: Ignoring "noauto" option for root device
	[  +0.176057] systemd-fstab-generator[3699]: Ignoring "noauto" option for root device
	[  +0.143060] systemd-fstab-generator[3711]: Ignoring "noauto" option for root device
	[  +0.269167] systemd-fstab-generator[3739]: Ignoring "noauto" option for root device
	[  +0.807853] systemd-fstab-generator[3842]: Ignoring "noauto" option for root device
	[  +5.953703] kauditd_printk_skb: 122 callbacks suppressed
	[Jul29 17:28] kauditd_printk_skb: 85 callbacks suppressed
	[ +10.070535] kauditd_printk_skb: 1 callbacks suppressed
	[ +17.340648] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [2a27f5a54bd43275313e419dabaa643ad1764f5cd10953333df1eea8a9a4bf1b] <==
	2024/07/29 17:26:12 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-29T17:26:12.206438Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"648.413457ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" limit:500 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-07-29T17:26:12.206486Z","caller":"traceutil/trace.go:171","msg":"trace[558497648] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; }","duration":"648.586686ms","start":"2024-07-29T17:26:11.557892Z","end":"2024-07-29T17:26:12.206479Z","steps":["trace[558497648] 'agreement among raft nodes before linearized reading'  (duration: 648.53391ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T17:26:12.206515Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T17:26:11.557879Z","time spent":"648.623025ms","remote":"127.0.0.1:43202","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":0,"request content":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" limit:500 "}
	2024/07/29 17:26:12 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-29T17:26:12.265193Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.114:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T17:26:12.265333Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.114:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T17:26:12.271164Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"7df1350fafd42bce","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-29T17:26:12.271305Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"add4e939ac9b709a"}
	{"level":"info","ts":"2024-07-29T17:26:12.271337Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"add4e939ac9b709a"}
	{"level":"info","ts":"2024-07-29T17:26:12.271362Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"add4e939ac9b709a"}
	{"level":"info","ts":"2024-07-29T17:26:12.27148Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7df1350fafd42bce","remote-peer-id":"add4e939ac9b709a"}
	{"level":"info","ts":"2024-07-29T17:26:12.271533Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7df1350fafd42bce","remote-peer-id":"add4e939ac9b709a"}
	{"level":"info","ts":"2024-07-29T17:26:12.27158Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7df1350fafd42bce","remote-peer-id":"add4e939ac9b709a"}
	{"level":"info","ts":"2024-07-29T17:26:12.271591Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"add4e939ac9b709a"}
	{"level":"info","ts":"2024-07-29T17:26:12.271596Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d9be55a8daa69990"}
	{"level":"info","ts":"2024-07-29T17:26:12.271607Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d9be55a8daa69990"}
	{"level":"info","ts":"2024-07-29T17:26:12.271648Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d9be55a8daa69990"}
	{"level":"info","ts":"2024-07-29T17:26:12.271848Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990"}
	{"level":"info","ts":"2024-07-29T17:26:12.271889Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990"}
	{"level":"info","ts":"2024-07-29T17:26:12.271978Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7df1350fafd42bce","remote-peer-id":"d9be55a8daa69990"}
	{"level":"info","ts":"2024-07-29T17:26:12.272004Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d9be55a8daa69990"}
	{"level":"info","ts":"2024-07-29T17:26:12.274578Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.114:2380"}
	{"level":"info","ts":"2024-07-29T17:26:12.274706Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.114:2380"}
	{"level":"info","ts":"2024-07-29T17:26:12.274717Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-900414","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.114:2380"],"advertise-client-urls":["https://192.168.39.114:2379"]}
	
	
	==> etcd [96d199d063086572e53c3056fe3c36873a53cd3fe0bc1d0c796366f2c85d8b47] <==
	{"level":"info","ts":"2024-07-29T17:29:28.27474Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"7df1350fafd42bce","remote-peer-id":"add4e939ac9b709a"}
	{"level":"warn","ts":"2024-07-29T17:30:02.699736Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.094094ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-wnfsb\" ","response":"range_response_count:1 size:4661"}
	{"level":"info","ts":"2024-07-29T17:30:02.700298Z","caller":"traceutil/trace.go:171","msg":"trace[902550932] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-wnfsb; range_end:; response_count:1; response_revision:2502; }","duration":"147.805095ms","start":"2024-07-29T17:30:02.552463Z","end":"2024-07-29T17:30:02.700268Z","steps":["trace[902550932] 'agreement among raft nodes before linearized reading'  (duration: 99.880067ms)","trace[902550932] 'range keys from in-memory index tree'  (duration: 47.149182ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T17:30:02.700058Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.48159ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:434"}
	{"level":"info","ts":"2024-07-29T17:30:02.700614Z","caller":"traceutil/trace.go:171","msg":"trace[1086273481] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2503; }","duration":"143.027834ms","start":"2024-07-29T17:30:02.557542Z","end":"2024-07-29T17:30:02.70057Z","steps":["trace[1086273481] 'agreement among raft nodes before linearized reading'  (duration: 142.361019ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T17:30:02.700118Z","caller":"traceutil/trace.go:171","msg":"trace[439385542] transaction","detail":"{read_only:false; response_revision:2503; number_of_response:1; }","duration":"151.903879ms","start":"2024-07-29T17:30:02.548197Z","end":"2024-07-29T17:30:02.700101Z","steps":["trace[439385542] 'process raft request'  (duration: 103.905744ms)","trace[439385542] 'compare'  (duration: 47.71552ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T17:30:10.83324Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce switched to configuration voters=(9075093065618959310 15690072335516604816)"}
	{"level":"info","ts":"2024-07-29T17:30:10.839024Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"101f5850ef417740","local-member-id":"7df1350fafd42bce","removed-remote-peer-id":"add4e939ac9b709a","removed-remote-peer-urls":["https://192.168.39.6:2380"]}
	{"level":"info","ts":"2024-07-29T17:30:10.839146Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"add4e939ac9b709a"}
	{"level":"warn","ts":"2024-07-29T17:30:10.839223Z","caller":"etcdserver/server.go:980","msg":"rejected Raft message from removed member","local-member-id":"7df1350fafd42bce","removed-member-id":"add4e939ac9b709a"}
	{"level":"warn","ts":"2024-07-29T17:30:10.839508Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2024-07-29T17:30:10.839601Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"add4e939ac9b709a"}
	{"level":"info","ts":"2024-07-29T17:30:10.839657Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"add4e939ac9b709a"}
	{"level":"warn","ts":"2024-07-29T17:30:10.840159Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"add4e939ac9b709a"}
	{"level":"info","ts":"2024-07-29T17:30:10.840284Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"add4e939ac9b709a"}
	{"level":"info","ts":"2024-07-29T17:30:10.840413Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7df1350fafd42bce","remote-peer-id":"add4e939ac9b709a"}
	{"level":"warn","ts":"2024-07-29T17:30:10.840668Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7df1350fafd42bce","remote-peer-id":"add4e939ac9b709a","error":"context canceled"}
	{"level":"warn","ts":"2024-07-29T17:30:10.840743Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"add4e939ac9b709a","error":"failed to read add4e939ac9b709a on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-29T17:30:10.840798Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7df1350fafd42bce","remote-peer-id":"add4e939ac9b709a"}
	{"level":"warn","ts":"2024-07-29T17:30:10.841129Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"7df1350fafd42bce","remote-peer-id":"add4e939ac9b709a","error":"context canceled"}
	{"level":"info","ts":"2024-07-29T17:30:10.841218Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7df1350fafd42bce","remote-peer-id":"add4e939ac9b709a"}
	{"level":"info","ts":"2024-07-29T17:30:10.841259Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"add4e939ac9b709a"}
	{"level":"info","ts":"2024-07-29T17:30:10.841298Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"7df1350fafd42bce","removed-remote-peer-id":"add4e939ac9b709a"}
	{"level":"warn","ts":"2024-07-29T17:30:10.858688Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"7df1350fafd42bce","remote-peer-id-stream-handler":"7df1350fafd42bce","remote-peer-id-from":"add4e939ac9b709a"}
	{"level":"warn","ts":"2024-07-29T17:30:10.859229Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"7df1350fafd42bce","remote-peer-id-stream-handler":"7df1350fafd42bce","remote-peer-id-from":"add4e939ac9b709a"}
	
	
	==> kernel <==
	 17:32:45 up 16 min,  0 users,  load average: 1.19, 0.61, 0.35
	Linux ha-900414 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [10b182b72bc50740d9cc2e0ed8b5c1d4b8f58c58594cc462fc796a75ccce7d38] <==
	I0729 17:25:42.577057       1 main.go:295] Handling node with IPs: map[192.168.39.156:{}]
	I0729 17:25:42.577099       1 main.go:322] Node ha-900414-m04 has CIDR [10.244.3.0/24] 
	I0729 17:25:42.577245       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0729 17:25:42.577270       1 main.go:299] handling current node
	I0729 17:25:42.577299       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0729 17:25:42.577304       1 main.go:322] Node ha-900414-m02 has CIDR [10.244.1.0/24] 
	I0729 17:25:42.577366       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0729 17:25:42.577371       1 main.go:322] Node ha-900414-m03 has CIDR [10.244.2.0/24] 
	I0729 17:25:52.573504       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0729 17:25:52.573575       1 main.go:299] handling current node
	I0729 17:25:52.573595       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0729 17:25:52.573603       1 main.go:322] Node ha-900414-m02 has CIDR [10.244.1.0/24] 
	I0729 17:25:52.573764       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0729 17:25:52.573787       1 main.go:322] Node ha-900414-m03 has CIDR [10.244.2.0/24] 
	I0729 17:25:52.573844       1 main.go:295] Handling node with IPs: map[192.168.39.156:{}]
	I0729 17:25:52.573865       1 main.go:322] Node ha-900414-m04 has CIDR [10.244.3.0/24] 
	I0729 17:26:02.569975       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0729 17:26:02.570090       1 main.go:322] Node ha-900414-m02 has CIDR [10.244.1.0/24] 
	I0729 17:26:02.570315       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0729 17:26:02.570394       1 main.go:322] Node ha-900414-m03 has CIDR [10.244.2.0/24] 
	I0729 17:26:02.570490       1 main.go:295] Handling node with IPs: map[192.168.39.156:{}]
	I0729 17:26:02.570512       1 main.go:322] Node ha-900414-m04 has CIDR [10.244.3.0/24] 
	I0729 17:26:02.570585       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0729 17:26:02.570605       1 main.go:299] handling current node
	E0729 17:26:10.719217       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	
	
	==> kindnet [478caa49a9e2e5f11ac1dfc8c6e870c29f3045b994c601e6cd646952b9c0de2f] <==
	I0729 17:32:03.089701       1 main.go:322] Node ha-900414-m04 has CIDR [10.244.3.0/24] 
	I0729 17:32:13.082417       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0729 17:32:13.082509       1 main.go:299] handling current node
	I0729 17:32:13.082545       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0729 17:32:13.082551       1 main.go:322] Node ha-900414-m02 has CIDR [10.244.1.0/24] 
	I0729 17:32:13.082721       1 main.go:295] Handling node with IPs: map[192.168.39.156:{}]
	I0729 17:32:13.082743       1 main.go:322] Node ha-900414-m04 has CIDR [10.244.3.0/24] 
	I0729 17:32:23.086978       1 main.go:295] Handling node with IPs: map[192.168.39.156:{}]
	I0729 17:32:23.087100       1 main.go:322] Node ha-900414-m04 has CIDR [10.244.3.0/24] 
	I0729 17:32:23.087251       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0729 17:32:23.087377       1 main.go:299] handling current node
	I0729 17:32:23.087402       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0729 17:32:23.087420       1 main.go:322] Node ha-900414-m02 has CIDR [10.244.1.0/24] 
	I0729 17:32:33.091685       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0729 17:32:33.091754       1 main.go:299] handling current node
	I0729 17:32:33.091774       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0729 17:32:33.091782       1 main.go:322] Node ha-900414-m02 has CIDR [10.244.1.0/24] 
	I0729 17:32:33.092027       1 main.go:295] Handling node with IPs: map[192.168.39.156:{}]
	I0729 17:32:33.092059       1 main.go:322] Node ha-900414-m04 has CIDR [10.244.3.0/24] 
	I0729 17:32:43.083580       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0729 17:32:43.083639       1 main.go:299] handling current node
	I0729 17:32:43.083657       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0729 17:32:43.083665       1 main.go:322] Node ha-900414-m02 has CIDR [10.244.1.0/24] 
	I0729 17:32:43.083843       1 main.go:295] Handling node with IPs: map[192.168.39.156:{}]
	I0729 17:32:43.083866       1 main.go:322] Node ha-900414-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [237f4f9a22aab5537900193f82504f315f0a18522b4ec147a18810a7207e9d03] <==
	I0729 17:28:31.053573       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0729 17:28:31.053613       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0729 17:28:31.053645       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0729 17:28:31.133732       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 17:28:31.133881       1 policy_source.go:224] refreshing policies
	I0729 17:28:31.138630       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 17:28:31.147697       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 17:28:31.147803       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 17:28:31.148164       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 17:28:31.148193       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 17:28:31.148647       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 17:28:31.148702       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 17:28:31.148981       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 17:28:31.149200       1 aggregator.go:165] initial CRD sync complete...
	I0729 17:28:31.149273       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 17:28:31.149359       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 17:28:31.149421       1 cache.go:39] Caches are synced for autoregister controller
	I0729 17:28:31.155908       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0729 17:28:31.159534       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.111 192.168.39.6]
	I0729 17:28:31.160844       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 17:28:31.170378       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0729 17:28:31.190746       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0729 17:28:31.224315       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 17:28:32.063847       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0729 17:28:32.511551       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.111 192.168.39.114 192.168.39.6]
	
	
	==> kube-apiserver [d511eeca56c8ff8ccbb88e762a12ef7258f1c2175101320dda0553e82887c297] <==
	I0729 17:27:52.334061       1 options.go:221] external host was not specified, using 192.168.39.114
	I0729 17:27:52.347705       1 server.go:148] Version: v1.30.3
	I0729 17:27:52.347770       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 17:27:52.954753       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 17:27:52.963999       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 17:27:52.964128       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 17:27:52.964153       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 17:27:52.964319       1 instance.go:299] Using reconciler: lease
	W0729 17:28:12.952891       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0729 17:28:12.953414       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0729 17:28:12.965355       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0729 17:28:12.965355       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [824354c7c16e9e8af404495001dc5d595a7cbc026c918156cf68ed850d7c19e8] <==
	I0729 17:30:08.680407       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="190.702µs"
	I0729 17:30:08.688403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.606µs"
	I0729 17:30:08.701480       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.938µs"
	I0729 17:30:08.709383       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="88.4µs"
	I0729 17:30:08.716509       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.138µs"
	I0729 17:30:08.730986       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="364.654µs"
	I0729 17:30:09.554792       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.19µs"
	I0729 17:30:09.841176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.559µs"
	I0729 17:30:09.879124       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.768µs"
	I0729 17:30:09.887645       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.503µs"
	I0729 17:30:10.698528       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.194322ms"
	I0729 17:30:10.699594       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.636µs"
	I0729 17:30:22.276478       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-900414-m04"
	E0729 17:30:23.488059       1 gc_controller.go:153] "Failed to get node" err="node \"ha-900414-m03\" not found" logger="pod-garbage-collector-controller" node="ha-900414-m03"
	E0729 17:30:23.488124       1 gc_controller.go:153] "Failed to get node" err="node \"ha-900414-m03\" not found" logger="pod-garbage-collector-controller" node="ha-900414-m03"
	E0729 17:30:23.488136       1 gc_controller.go:153] "Failed to get node" err="node \"ha-900414-m03\" not found" logger="pod-garbage-collector-controller" node="ha-900414-m03"
	E0729 17:30:23.488143       1 gc_controller.go:153] "Failed to get node" err="node \"ha-900414-m03\" not found" logger="pod-garbage-collector-controller" node="ha-900414-m03"
	E0729 17:30:23.488150       1 gc_controller.go:153] "Failed to get node" err="node \"ha-900414-m03\" not found" logger="pod-garbage-collector-controller" node="ha-900414-m03"
	E0729 17:30:43.489074       1 gc_controller.go:153] "Failed to get node" err="node \"ha-900414-m03\" not found" logger="pod-garbage-collector-controller" node="ha-900414-m03"
	E0729 17:30:43.489121       1 gc_controller.go:153] "Failed to get node" err="node \"ha-900414-m03\" not found" logger="pod-garbage-collector-controller" node="ha-900414-m03"
	E0729 17:30:43.489128       1 gc_controller.go:153] "Failed to get node" err="node \"ha-900414-m03\" not found" logger="pod-garbage-collector-controller" node="ha-900414-m03"
	E0729 17:30:43.489133       1 gc_controller.go:153] "Failed to get node" err="node \"ha-900414-m03\" not found" logger="pod-garbage-collector-controller" node="ha-900414-m03"
	E0729 17:30:43.489138       1 gc_controller.go:153] "Failed to get node" err="node \"ha-900414-m03\" not found" logger="pod-garbage-collector-controller" node="ha-900414-m03"
	I0729 17:30:57.635470       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.284545ms"
	I0729 17:30:57.636080       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.563µs"
	
	
	==> kube-controller-manager [96142230a5d1570c505a5127e3d4b0025e6c120c808eec6b1579291d9de14bb9] <==
	I0729 17:27:52.856341       1 serving.go:380] Generated self-signed cert in-memory
	I0729 17:27:53.462620       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 17:27:53.462675       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 17:27:53.464454       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 17:27:53.464598       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 17:27:53.464791       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 17:27:53.465036       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0729 17:28:13.973747       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.114:8443/healthz\": dial tcp 192.168.39.114:8443: connect: connection refused"
	
	
	==> kube-proxy [37ef29620e9c9670549fa7741de5956157c7a03728d417b46b44a7b1abbf2ce9] <==
	E0729 17:24:54.585578       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1852": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:24:54.585371       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-900414&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:24:54.585614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-900414&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:25:01.625399       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:25:01.625470       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:25:01.625403       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-900414&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:25:01.625507       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-900414&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:25:01.625762       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1852": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:25:01.625901       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1852": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:25:10.842335       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1852": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:25:10.842731       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1852": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:25:13.913501       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:25:13.913554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:25:13.913619       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-900414&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:25:13.913636       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-900414&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:25:32.346213       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1852": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:25:32.346333       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1852": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:25:32.346598       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:25:32.346651       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:25:32.346736       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-900414&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:25:32.346792       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-900414&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:26:03.066557       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:26:03.066665       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:26:09.210117       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1852": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:26:09.210672       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1852": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [f74b6fc801cbb9e29df38fb778b6af9db3ab8950818cd18c03383e749fc4190a] <==
	I0729 17:27:53.490852       1 server_linux.go:69] "Using iptables proxy"
	E0729 17:27:53.659567       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-900414\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 17:27:56.730026       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-900414\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 17:27:59.801662       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-900414\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 17:28:05.945581       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-900414\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 17:28:15.162464       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-900414\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0729 17:28:33.411661       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.114"]
	I0729 17:28:33.453166       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 17:28:33.453320       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 17:28:33.453352       1 server_linux.go:165] "Using iptables Proxier"
	I0729 17:28:33.457070       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 17:28:33.458044       1 server.go:872] "Version info" version="v1.30.3"
	I0729 17:28:33.458154       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 17:28:33.460518       1 config.go:192] "Starting service config controller"
	I0729 17:28:33.460589       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 17:28:33.460640       1 config.go:101] "Starting endpoint slice config controller"
	I0729 17:28:33.460657       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 17:28:33.462595       1 config.go:319] "Starting node config controller"
	I0729 17:28:33.463414       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 17:28:33.560906       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 17:28:33.561011       1 shared_informer.go:320] Caches are synced for service config
	I0729 17:28:33.564167       1 shared_informer.go:320] Caches are synced for node config
	W0729 17:31:17.754518       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0729 17:31:17.754543       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0729 17:31:17.754518       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-scheduler [a7721018288f905547c9c059b6453a96e4c74f3573058e88425444162b255edf] <==
	W0729 17:26:07.794760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 17:26:07.794850       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 17:26:07.902269       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 17:26:07.902400       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 17:26:08.011995       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 17:26:08.012048       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 17:26:08.818148       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 17:26:08.818276       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 17:26:09.071257       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 17:26:09.071347       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 17:26:09.167572       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 17:26:09.167662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 17:26:09.319179       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 17:26:09.319230       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 17:26:09.319355       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 17:26:09.319390       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 17:26:09.477983       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 17:26:09.478081       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 17:26:09.628764       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 17:26:09.628813       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 17:26:10.398031       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 17:26:10.398123       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0729 17:26:12.176222       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0729 17:26:12.200716       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0729 17:26:12.201651       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c4b08de2e88b41b61b1403daba370b669abfd1acee1793945733079da7004a6e] <==
	W0729 17:28:22.567262       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.114:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	E0729 17:28:22.567374       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.114:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	W0729 17:28:22.715419       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.114:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	E0729 17:28:22.715457       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.114:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	W0729 17:28:23.024540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.114:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	E0729 17:28:23.024601       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.114:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	W0729 17:28:23.317883       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.114:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	E0729 17:28:23.318040       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.114:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	W0729 17:28:23.673885       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.114:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	E0729 17:28:23.673997       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.114:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	W0729 17:28:28.280734       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.114:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	E0729 17:28:28.280857       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.114:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	W0729 17:28:28.441902       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.114:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	E0729 17:28:28.442177       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.114:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	W0729 17:28:28.629372       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.114:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	E0729 17:28:28.629524       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.114:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	W0729 17:28:28.822131       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.114:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	E0729 17:28:28.822245       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.114:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.114:8443: connect: connection refused
	W0729 17:28:31.099378       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 17:28:31.099428       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0729 17:28:53.978118       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 17:30:08.698879       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-9pvc5\": pod busybox-fc5497c4f-9pvc5 is already assigned to node \"ha-900414-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-9pvc5" node="ha-900414-m04"
	E0729 17:30:08.699153       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod d98e1299-de40-4d0f-b4e3-fd1832b40a64(default/busybox-fc5497c4f-9pvc5) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-9pvc5"
	E0729 17:30:08.699194       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-9pvc5\": pod busybox-fc5497c4f-9pvc5 is already assigned to node \"ha-900414-m04\"" pod="default/busybox-fc5497c4f-9pvc5"
	I0729 17:30:08.699218       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-9pvc5" node="ha-900414-m04"
	
	
	==> kubelet <==
	Jul 29 17:28:43 ha-900414 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:28:43 ha-900414 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:28:44 ha-900414 kubelet[1377]: I0729 17:28:44.165389    1377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-4fv4t" podStartSLOduration=555.272981287 podStartE2EDuration="9m16.165332205s" podCreationTimestamp="2024-07-29 17:19:28 +0000 UTC" firstStartedPulling="2024-07-29 17:19:29.383688696 +0000 UTC m=+165.813501167" lastFinishedPulling="2024-07-29 17:19:30.276039603 +0000 UTC m=+166.705852085" observedRunningTime="2024-07-29 17:19:30.501152176 +0000 UTC m=+166.930964668" watchObservedRunningTime="2024-07-29 17:28:44.165332205 +0000 UTC m=+720.595144694"
	Jul 29 17:29:12 ha-900414 kubelet[1377]: I0729 17:29:12.740862    1377 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-900414" podUID="bf3918b4-6cc5-499b-808e-b6c33138cae2"
	Jul 29 17:29:12 ha-900414 kubelet[1377]: I0729 17:29:12.765878    1377 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-900414"
	Jul 29 17:29:43 ha-900414 kubelet[1377]: E0729 17:29:43.788070    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:29:43 ha-900414 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:29:43 ha-900414 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:29:43 ha-900414 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:29:43 ha-900414 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:30:43 ha-900414 kubelet[1377]: E0729 17:30:43.786460    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:30:43 ha-900414 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:30:43 ha-900414 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:30:43 ha-900414 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:30:43 ha-900414 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:31:43 ha-900414 kubelet[1377]: E0729 17:31:43.785549    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:31:43 ha-900414 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:31:43 ha-900414 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:31:43 ha-900414 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:31:43 ha-900414 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:32:43 ha-900414 kubelet[1377]: E0729 17:32:43.792301    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:32:43 ha-900414 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:32:43 ha-900414 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:32:43 ha-900414 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:32:43 ha-900414 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 17:32:43.962883   38762 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19345-11206/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
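The "bufio.Scanner: token too long" failure in the stderr block above is Go's standard bufio behaviour: a Scanner rejects any single token (here, one line of lastStart.txt) longer than its buffer, which defaults to 64 KiB (bufio.MaxScanTokenSize). A minimal, self-contained sketch of that limit and of raising it with Scanner.Buffer follows; the 10 MiB cap is an arbitrary illustrative value, not the setting minikube uses.

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	func main() {
		// One "line" slightly longer than the default 64 KiB token limit.
		long := strings.Repeat("x", bufio.MaxScanTokenSize+1)

		// Default Scanner: Scan returns false and Err reports the oversized token.
		s := bufio.NewScanner(strings.NewReader(long))
		for s.Scan() {
		}
		fmt.Println(s.Err()) // bufio.Scanner: token too long

		// Scanner.Buffer raises the per-token cap, so the long line scans cleanly.
		s = bufio.NewScanner(strings.NewReader(long))
		s.Buffer(make([]byte, 0, 64*1024), 10*1024*1024) // hypothetical 10 MiB cap
		for s.Scan() {
		}
		fmt.Println(s.Err()) // <nil>
	}
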
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-900414 -n ha-900414
helpers_test.go:261: (dbg) Run:  kubectl --context ha-900414 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.67s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (322.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-602258
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-602258
E0729 17:48:29.676912   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
E0729 17:49:55.949863   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-602258: exit status 82 (2m1.834575879s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-602258-m03"  ...
	* Stopping node "multinode-602258-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-602258" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-602258 --wait=true -v=8 --alsologtostderr
E0729 17:51:52.902382   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
E0729 17:53:29.677260   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-602258 --wait=true -v=8 --alsologtostderr: (3m18.092645717s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-602258
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-602258 -n multinode-602258
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-602258 logs -n 25: (1.429860337s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-602258 ssh -n                                                                 | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | multinode-602258-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-602258 cp multinode-602258-m02:/home/docker/cp-test.txt                       | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile669002766/001/cp-test_multinode-602258-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-602258 ssh -n                                                                 | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | multinode-602258-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-602258 cp multinode-602258-m02:/home/docker/cp-test.txt                       | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | multinode-602258:/home/docker/cp-test_multinode-602258-m02_multinode-602258.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-602258 ssh -n                                                                 | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | multinode-602258-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-602258 ssh -n multinode-602258 sudo cat                                       | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | /home/docker/cp-test_multinode-602258-m02_multinode-602258.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-602258 cp multinode-602258-m02:/home/docker/cp-test.txt                       | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | multinode-602258-m03:/home/docker/cp-test_multinode-602258-m02_multinode-602258-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-602258 ssh -n                                                                 | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | multinode-602258-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-602258 ssh -n multinode-602258-m03 sudo cat                                   | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | /home/docker/cp-test_multinode-602258-m02_multinode-602258-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-602258 cp testdata/cp-test.txt                                                | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | multinode-602258-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-602258 ssh -n                                                                 | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | multinode-602258-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-602258 cp multinode-602258-m03:/home/docker/cp-test.txt                       | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile669002766/001/cp-test_multinode-602258-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-602258 ssh -n                                                                 | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | multinode-602258-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-602258 cp multinode-602258-m03:/home/docker/cp-test.txt                       | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | multinode-602258:/home/docker/cp-test_multinode-602258-m03_multinode-602258.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-602258 ssh -n                                                                 | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | multinode-602258-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-602258 ssh -n multinode-602258 sudo cat                                       | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | /home/docker/cp-test_multinode-602258-m03_multinode-602258.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-602258 cp multinode-602258-m03:/home/docker/cp-test.txt                       | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | multinode-602258-m02:/home/docker/cp-test_multinode-602258-m03_multinode-602258-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-602258 ssh -n                                                                 | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | multinode-602258-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-602258 ssh -n multinode-602258-m02 sudo cat                                   | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | /home/docker/cp-test_multinode-602258-m03_multinode-602258-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-602258 node stop m03                                                          | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	| node    | multinode-602258 node start                                                             | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:48 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-602258                                                                | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:48 UTC |                     |
	| stop    | -p multinode-602258                                                                     | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:48 UTC |                     |
	| start   | -p multinode-602258                                                                     | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:50 UTC | 29 Jul 24 17:53 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-602258                                                                | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 17:50:21
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 17:50:21.239950   48198 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:50:21.240217   48198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:50:21.240226   48198 out.go:304] Setting ErrFile to fd 2...
	I0729 17:50:21.240230   48198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:50:21.240406   48198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 17:50:21.240957   48198 out.go:298] Setting JSON to false
	I0729 17:50:21.241946   48198 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5573,"bootTime":1722269848,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 17:50:21.242005   48198 start.go:139] virtualization: kvm guest
	I0729 17:50:21.244304   48198 out.go:177] * [multinode-602258] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 17:50:21.245656   48198 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 17:50:21.245695   48198 notify.go:220] Checking for updates...
	I0729 17:50:21.247758   48198 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:50:21.248957   48198 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 17:50:21.250089   48198 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 17:50:21.251303   48198 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 17:50:21.252390   48198 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:50:21.253873   48198 config.go:182] Loaded profile config "multinode-602258": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:50:21.253963   48198 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:50:21.254380   48198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:50:21.254439   48198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:50:21.269819   48198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45475
	I0729 17:50:21.270306   48198 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:50:21.270842   48198 main.go:141] libmachine: Using API Version  1
	I0729 17:50:21.270857   48198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:50:21.271233   48198 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:50:21.271402   48198 main.go:141] libmachine: (multinode-602258) Calling .DriverName
	I0729 17:50:21.306924   48198 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 17:50:21.308157   48198 start.go:297] selected driver: kvm2
	I0729 17:50:21.308176   48198 start.go:901] validating driver "kvm2" against &{Name:multinode-602258 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:multinode-602258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.218 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.107 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:50:21.308345   48198 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:50:21.308686   48198 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:50:21.308773   48198 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19345-11206/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 17:50:21.323725   48198 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 17:50:21.324453   48198 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:50:21.324516   48198 cni.go:84] Creating CNI manager for ""
	I0729 17:50:21.324536   48198 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 17:50:21.324606   48198 start.go:340] cluster config:
	{Name:multinode-602258 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-602258 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.218 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.107 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:50:21.324751   48198 iso.go:125] acquiring lock: {Name:mke302f851ce8256f9b44dd080ed38df68285cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:50:21.326654   48198 out.go:177] * Starting "multinode-602258" primary control-plane node in "multinode-602258" cluster
	I0729 17:50:21.327834   48198 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:50:21.327867   48198 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 17:50:21.327879   48198 cache.go:56] Caching tarball of preloaded images
	I0729 17:50:21.327962   48198 preload.go:172] Found /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 17:50:21.327974   48198 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 17:50:21.328132   48198 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/multinode-602258/config.json ...
	I0729 17:50:21.328362   48198 start.go:360] acquireMachinesLock for multinode-602258: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:50:21.328424   48198 start.go:364] duration metric: took 42.22µs to acquireMachinesLock for "multinode-602258"
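An aside on the machines lock acquired above: the logged lock parameters (Delay:500ms Timeout:13m0s) describe a poll-until-deadline acquisition. Below is a minimal Go sketch of that pattern, assuming a plain exclusive lock file as the stand-in resource; minikube's actual lock implementation is not shown in this log and is not reproduced here.

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire polls for an exclusive lock file every `delay` until `timeout`
// elapses, mirroring the Delay/Timeout values printed in the log above.
func acquire(path string, delay, timeout time.Duration) (*os.File, error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
		if err == nil {
			return f, nil // lock held; caller removes the file to release it
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out acquiring " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	// Hypothetical path, chosen only for the sketch.
	f, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	fmt.Println("acquired", f.Name())
}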
	I0729 17:50:21.328446   48198 start.go:96] Skipping create...Using existing machine configuration
	I0729 17:50:21.328457   48198 fix.go:54] fixHost starting: 
	I0729 17:50:21.328702   48198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:50:21.328737   48198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:50:21.343331   48198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38305
	I0729 17:50:21.343801   48198 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:50:21.344242   48198 main.go:141] libmachine: Using API Version  1
	I0729 17:50:21.344263   48198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:50:21.344601   48198 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:50:21.344818   48198 main.go:141] libmachine: (multinode-602258) Calling .DriverName
	I0729 17:50:21.344994   48198 main.go:141] libmachine: (multinode-602258) Calling .GetState
	I0729 17:50:21.346803   48198 fix.go:112] recreateIfNeeded on multinode-602258: state=Running err=<nil>
	W0729 17:50:21.346822   48198 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 17:50:21.348828   48198 out.go:177] * Updating the running kvm2 "multinode-602258" VM ...
	I0729 17:50:21.350021   48198 machine.go:94] provisionDockerMachine start ...
	I0729 17:50:21.350047   48198 main.go:141] libmachine: (multinode-602258) Calling .DriverName
	I0729 17:50:21.350272   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHHostname
	I0729 17:50:21.352838   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.353377   48198 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:50:21.353404   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.353593   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHPort
	I0729 17:50:21.353793   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:50:21.353963   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:50:21.354114   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHUsername
	I0729 17:50:21.354275   48198 main.go:141] libmachine: Using SSH client type: native
	I0729 17:50:21.354472   48198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0729 17:50:21.354485   48198 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 17:50:21.464516   48198 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-602258
	
	I0729 17:50:21.464542   48198 main.go:141] libmachine: (multinode-602258) Calling .GetMachineName
	I0729 17:50:21.464830   48198 buildroot.go:166] provisioning hostname "multinode-602258"
	I0729 17:50:21.464861   48198 main.go:141] libmachine: (multinode-602258) Calling .GetMachineName
	I0729 17:50:21.465073   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHHostname
	I0729 17:50:21.467735   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.468102   48198 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:50:21.468146   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.468240   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHPort
	I0729 17:50:21.468404   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:50:21.468546   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:50:21.468678   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHUsername
	I0729 17:50:21.468848   48198 main.go:141] libmachine: Using SSH client type: native
	I0729 17:50:21.469011   48198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0729 17:50:21.469023   48198 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-602258 && echo "multinode-602258" | sudo tee /etc/hostname
	I0729 17:50:21.592791   48198 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-602258
	
	I0729 17:50:21.592814   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHHostname
	I0729 17:50:21.595934   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.596356   48198 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:50:21.596386   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.596544   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHPort
	I0729 17:50:21.596717   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:50:21.596870   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:50:21.596998   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHUsername
	I0729 17:50:21.597116   48198 main.go:141] libmachine: Using SSH client type: native
	I0729 17:50:21.597323   48198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0729 17:50:21.597340   48198 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-602258' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-602258/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-602258' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 17:50:21.703496   48198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:50:21.703536   48198 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 17:50:21.703581   48198 buildroot.go:174] setting up certificates
	I0729 17:50:21.703592   48198 provision.go:84] configureAuth start
	I0729 17:50:21.703610   48198 main.go:141] libmachine: (multinode-602258) Calling .GetMachineName
	I0729 17:50:21.703918   48198 main.go:141] libmachine: (multinode-602258) Calling .GetIP
	I0729 17:50:21.706519   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.706826   48198 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:50:21.706869   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.707021   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHHostname
	I0729 17:50:21.709179   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.709609   48198 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:50:21.709648   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.709716   48198 provision.go:143] copyHostCerts
	I0729 17:50:21.709745   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 17:50:21.709782   48198 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 17:50:21.709799   48198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 17:50:21.709877   48198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 17:50:21.709968   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 17:50:21.709993   48198 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 17:50:21.710000   48198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 17:50:21.710041   48198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 17:50:21.710109   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 17:50:21.710132   48198 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 17:50:21.710138   48198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 17:50:21.710177   48198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 17:50:21.710242   48198 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.multinode-602258 san=[127.0.0.1 192.168.39.218 localhost minikube multinode-602258]
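The server certificate generated above carries the SANs 127.0.0.1, 192.168.39.218, localhost, minikube and multinode-602258. As a hedged illustration of assembling that SAN set with Go's standard library (self-signed here for brevity; minikube signs against its own CA key, which this sketch does not reproduce):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-602258"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as listed in the provision.go line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.218")},
		DNSNames:    []string{"localhost", "minikube", "multinode-602258"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}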
	I0729 17:50:21.786991   48198 provision.go:177] copyRemoteCerts
	I0729 17:50:21.787066   48198 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 17:50:21.787102   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHHostname
	I0729 17:50:21.789741   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.790085   48198 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:50:21.790110   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.790337   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHPort
	I0729 17:50:21.790507   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:50:21.790661   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHUsername
	I0729 17:50:21.790818   48198 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/multinode-602258/id_rsa Username:docker}
	I0729 17:50:21.873710   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 17:50:21.873812   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0729 17:50:21.900693   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 17:50:21.900771   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 17:50:21.926847   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 17:50:21.926919   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 17:50:21.951119   48198 provision.go:87] duration metric: took 247.510328ms to configureAuth
	I0729 17:50:21.951156   48198 buildroot.go:189] setting minikube options for container-runtime
	I0729 17:50:21.951441   48198 config.go:182] Loaded profile config "multinode-602258": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:50:21.951514   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHHostname
	I0729 17:50:21.954053   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.954460   48198 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:50:21.954480   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.954684   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHPort
	I0729 17:50:21.954849   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:50:21.955036   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:50:21.955226   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHUsername
	I0729 17:50:21.955365   48198 main.go:141] libmachine: Using SSH client type: native
	I0729 17:50:21.955536   48198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0729 17:50:21.955550   48198 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 17:51:52.854508   48198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 17:51:52.854533   48198 machine.go:97] duration metric: took 1m31.504498331s to provisionDockerMachine
	I0729 17:51:52.854546   48198 start.go:293] postStartSetup for "multinode-602258" (driver="kvm2")
	I0729 17:51:52.854556   48198 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 17:51:52.854571   48198 main.go:141] libmachine: (multinode-602258) Calling .DriverName
	I0729 17:51:52.854939   48198 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 17:51:52.854981   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHHostname
	I0729 17:51:52.857904   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:51:52.858296   48198 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:51:52.858316   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:51:52.858486   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHPort
	I0729 17:51:52.858668   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:51:52.858844   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHUsername
	I0729 17:51:52.858989   48198 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/multinode-602258/id_rsa Username:docker}
	I0729 17:51:52.942387   48198 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 17:51:52.946491   48198 command_runner.go:130] > NAME=Buildroot
	I0729 17:51:52.946509   48198 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0729 17:51:52.946513   48198 command_runner.go:130] > ID=buildroot
	I0729 17:51:52.946518   48198 command_runner.go:130] > VERSION_ID=2023.02.9
	I0729 17:51:52.946523   48198 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0729 17:51:52.946582   48198 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 17:51:52.946606   48198 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 17:51:52.946669   48198 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 17:51:52.946743   48198 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 17:51:52.946756   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> /etc/ssl/certs/183932.pem
	I0729 17:51:52.946837   48198 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 17:51:52.956704   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 17:51:52.981025   48198 start.go:296] duration metric: took 126.466099ms for postStartSetup
	I0729 17:51:52.981108   48198 fix.go:56] duration metric: took 1m31.652649258s for fixHost
	I0729 17:51:52.981133   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHHostname
	I0729 17:51:52.983919   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:51:52.984269   48198 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:51:52.984290   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:51:52.984452   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHPort
	I0729 17:51:52.984652   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:51:52.984811   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:51:52.984956   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHUsername
	I0729 17:51:52.985105   48198 main.go:141] libmachine: Using SSH client type: native
	I0729 17:51:52.985267   48198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0729 17:51:52.985276   48198 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 17:51:53.091125   48198 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722275513.058323927
	
	I0729 17:51:53.091161   48198 fix.go:216] guest clock: 1722275513.058323927
	I0729 17:51:53.091171   48198 fix.go:229] Guest: 2024-07-29 17:51:53.058323927 +0000 UTC Remote: 2024-07-29 17:51:52.981114501 +0000 UTC m=+91.775679826 (delta=77.209426ms)
	I0729 17:51:53.091232   48198 fix.go:200] guest clock delta is within tolerance: 77.209426ms
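The guest-clock check above parses the VM's `date +%s.%N` output and compares it with the host clock (delta=77.209426ms in this run). A small Go sketch of that comparison follows; the tolerance value is chosen purely for illustration and is not taken from minikube, and float parsing drops sub-microsecond precision, which is irrelevant for a drift check.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDeltaOK converts a "seconds.nanoseconds" guest timestamp to a time.Time,
// computes the drift against the local clock, and checks it against tolerance.
func clockDeltaOK(guestEpoch string, tolerance time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(guestEpoch, 64)
	if err != nil {
		return 0, false
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	// Guest timestamp copied from the log line above; tolerance is an assumption.
	delta, ok := clockDeltaOK("1722275513.058323927", 2*time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}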
	I0729 17:51:53.091240   48198 start.go:83] releasing machines lock for "multinode-602258", held for 1m31.762805153s
	I0729 17:51:53.091271   48198 main.go:141] libmachine: (multinode-602258) Calling .DriverName
	I0729 17:51:53.091545   48198 main.go:141] libmachine: (multinode-602258) Calling .GetIP
	I0729 17:51:53.094060   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:51:53.094385   48198 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:51:53.094413   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:51:53.094556   48198 main.go:141] libmachine: (multinode-602258) Calling .DriverName
	I0729 17:51:53.095022   48198 main.go:141] libmachine: (multinode-602258) Calling .DriverName
	I0729 17:51:53.095211   48198 main.go:141] libmachine: (multinode-602258) Calling .DriverName
	I0729 17:51:53.095308   48198 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 17:51:53.095355   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHHostname
	I0729 17:51:53.095415   48198 ssh_runner.go:195] Run: cat /version.json
	I0729 17:51:53.095455   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHHostname
	I0729 17:51:53.097788   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:51:53.097859   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:51:53.098131   48198 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:51:53.098157   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:51:53.098288   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHPort
	I0729 17:51:53.098301   48198 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:51:53.098325   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:51:53.098492   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:51:53.098496   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHPort
	I0729 17:51:53.098665   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHUsername
	I0729 17:51:53.098679   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:51:53.098913   48198 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/multinode-602258/id_rsa Username:docker}
	I0729 17:51:53.098954   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHUsername
	I0729 17:51:53.099091   48198 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/multinode-602258/id_rsa Username:docker}
	I0729 17:51:53.175247   48198 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0729 17:51:53.175412   48198 ssh_runner.go:195] Run: systemctl --version
	I0729 17:51:53.198086   48198 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0729 17:51:53.198754   48198 command_runner.go:130] > systemd 252 (252)
	I0729 17:51:53.198784   48198 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0729 17:51:53.198845   48198 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 17:51:53.360905   48198 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0729 17:51:53.367180   48198 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0729 17:51:53.367259   48198 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 17:51:53.367311   48198 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 17:51:53.377377   48198 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 17:51:53.377400   48198 start.go:495] detecting cgroup driver to use...
	I0729 17:51:53.377463   48198 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 17:51:53.394066   48198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 17:51:53.408358   48198 docker.go:217] disabling cri-docker service (if available) ...
	I0729 17:51:53.408406   48198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 17:51:53.422420   48198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 17:51:53.436355   48198 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 17:51:53.579125   48198 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 17:51:53.720116   48198 docker.go:233] disabling docker service ...
	I0729 17:51:53.720187   48198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 17:51:53.736266   48198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 17:51:53.750400   48198 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 17:51:53.903881   48198 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 17:51:54.058999   48198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 17:51:54.073866   48198 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 17:51:54.092538   48198 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0729 17:51:54.092742   48198 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 17:51:54.092819   48198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:54.103816   48198 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 17:51:54.103886   48198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:54.115311   48198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:54.126778   48198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:54.137679   48198 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 17:51:54.148631   48198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:54.159319   48198 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:54.169940   48198 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:54.181067   48198 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 17:51:54.190813   48198 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0729 17:51:54.190883   48198 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 17:51:54.200584   48198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:51:54.349253   48198 ssh_runner.go:195] Run: sudo systemctl restart crio
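For readability, the sed edits logged above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the settings reconstructed below. The values come directly from the commands in the log; the TOML section names and overall file layout are assumptions made for this sketch, and writing the file out locally is only to keep the example runnable — minikube edits it in place over SSH.

package main

import "os"

// Plausible drop-in content after the logged sed commands: pause image,
// cgroupfs cgroup manager, conmon_cgroup = "pod", and the unprivileged-port
// sysctl added to default_sysctls.
const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
	if err := os.WriteFile("02-crio.conf", []byte(crioDropIn), 0o644); err != nil {
		panic(err)
	}
}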
	I0729 17:51:54.591887   48198 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 17:51:54.591975   48198 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 17:51:54.596664   48198 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0729 17:51:54.596691   48198 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0729 17:51:54.596701   48198 command_runner.go:130] > Device: 0,22	Inode: 1328        Links: 1
	I0729 17:51:54.596712   48198 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 17:51:54.596720   48198 command_runner.go:130] > Access: 2024-07-29 17:51:54.450743080 +0000
	I0729 17:51:54.596725   48198 command_runner.go:130] > Modify: 2024-07-29 17:51:54.450743080 +0000
	I0729 17:51:54.596731   48198 command_runner.go:130] > Change: 2024-07-29 17:51:54.450743080 +0000
	I0729 17:51:54.596737   48198 command_runner.go:130] >  Birth: -
	I0729 17:51:54.596806   48198 start.go:563] Will wait 60s for crictl version
	I0729 17:51:54.596862   48198 ssh_runner.go:195] Run: which crictl
	I0729 17:51:54.600640   48198 command_runner.go:130] > /usr/bin/crictl
	I0729 17:51:54.600699   48198 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 17:51:54.641561   48198 command_runner.go:130] > Version:  0.1.0
	I0729 17:51:54.641586   48198 command_runner.go:130] > RuntimeName:  cri-o
	I0729 17:51:54.641591   48198 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0729 17:51:54.641596   48198 command_runner.go:130] > RuntimeApiVersion:  v1
	I0729 17:51:54.641613   48198 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 17:51:54.641665   48198 ssh_runner.go:195] Run: crio --version
	I0729 17:51:54.669212   48198 command_runner.go:130] > crio version 1.29.1
	I0729 17:51:54.669232   48198 command_runner.go:130] > Version:        1.29.1
	I0729 17:51:54.669237   48198 command_runner.go:130] > GitCommit:      unknown
	I0729 17:51:54.669242   48198 command_runner.go:130] > GitCommitDate:  unknown
	I0729 17:51:54.669264   48198 command_runner.go:130] > GitTreeState:   clean
	I0729 17:51:54.669269   48198 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 17:51:54.669274   48198 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 17:51:54.669277   48198 command_runner.go:130] > Compiler:       gc
	I0729 17:51:54.669281   48198 command_runner.go:130] > Platform:       linux/amd64
	I0729 17:51:54.669285   48198 command_runner.go:130] > Linkmode:       dynamic
	I0729 17:51:54.669289   48198 command_runner.go:130] > BuildTags:      
	I0729 17:51:54.669303   48198 command_runner.go:130] >   containers_image_ostree_stub
	I0729 17:51:54.669309   48198 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 17:51:54.669312   48198 command_runner.go:130] >   btrfs_noversion
	I0729 17:51:54.669316   48198 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 17:51:54.669321   48198 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 17:51:54.669324   48198 command_runner.go:130] >   seccomp
	I0729 17:51:54.669329   48198 command_runner.go:130] > LDFlags:          unknown
	I0729 17:51:54.669336   48198 command_runner.go:130] > SeccompEnabled:   true
	I0729 17:51:54.669339   48198 command_runner.go:130] > AppArmorEnabled:  false
	I0729 17:51:54.670610   48198 ssh_runner.go:195] Run: crio --version
	I0729 17:51:54.699198   48198 command_runner.go:130] > crio version 1.29.1
	I0729 17:51:54.699217   48198 command_runner.go:130] > Version:        1.29.1
	I0729 17:51:54.699223   48198 command_runner.go:130] > GitCommit:      unknown
	I0729 17:51:54.699228   48198 command_runner.go:130] > GitCommitDate:  unknown
	I0729 17:51:54.699231   48198 command_runner.go:130] > GitTreeState:   clean
	I0729 17:51:54.699236   48198 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 17:51:54.699240   48198 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 17:51:54.699250   48198 command_runner.go:130] > Compiler:       gc
	I0729 17:51:54.699255   48198 command_runner.go:130] > Platform:       linux/amd64
	I0729 17:51:54.699259   48198 command_runner.go:130] > Linkmode:       dynamic
	I0729 17:51:54.699265   48198 command_runner.go:130] > BuildTags:      
	I0729 17:51:54.699270   48198 command_runner.go:130] >   containers_image_ostree_stub
	I0729 17:51:54.699274   48198 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 17:51:54.699278   48198 command_runner.go:130] >   btrfs_noversion
	I0729 17:51:54.699283   48198 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 17:51:54.699290   48198 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 17:51:54.699293   48198 command_runner.go:130] >   seccomp
	I0729 17:51:54.699298   48198 command_runner.go:130] > LDFlags:          unknown
	I0729 17:51:54.699302   48198 command_runner.go:130] > SeccompEnabled:   true
	I0729 17:51:54.699306   48198 command_runner.go:130] > AppArmorEnabled:  false
	I0729 17:51:54.701243   48198 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 17:51:54.702677   48198 main.go:141] libmachine: (multinode-602258) Calling .GetIP
	I0729 17:51:54.705111   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:51:54.705497   48198 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:51:54.705524   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:51:54.705722   48198 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 17:51:54.710647   48198 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0729 17:51:54.710810   48198 kubeadm.go:883] updating cluster {Name:multinode-602258 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.3 ClusterName:multinode-602258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.218 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.107 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fal
se inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 17:51:54.710970   48198 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:51:54.711028   48198 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 17:51:54.755611   48198 command_runner.go:130] > {
	I0729 17:51:54.755634   48198 command_runner.go:130] >   "images": [
	I0729 17:51:54.755638   48198 command_runner.go:130] >     {
	I0729 17:51:54.755646   48198 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 17:51:54.755650   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.755656   48198 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 17:51:54.755659   48198 command_runner.go:130] >       ],
	I0729 17:51:54.755663   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.755673   48198 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 17:51:54.755684   48198 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 17:51:54.755691   48198 command_runner.go:130] >       ],
	I0729 17:51:54.755698   48198 command_runner.go:130] >       "size": "87165492",
	I0729 17:51:54.755704   48198 command_runner.go:130] >       "uid": null,
	I0729 17:51:54.755712   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.755724   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.755734   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.755738   48198 command_runner.go:130] >     },
	I0729 17:51:54.755741   48198 command_runner.go:130] >     {
	I0729 17:51:54.755747   48198 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 17:51:54.755754   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.755760   48198 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 17:51:54.755769   48198 command_runner.go:130] >       ],
	I0729 17:51:54.755777   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.755798   48198 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 17:51:54.755812   48198 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 17:51:54.755818   48198 command_runner.go:130] >       ],
	I0729 17:51:54.755827   48198 command_runner.go:130] >       "size": "87174707",
	I0729 17:51:54.755835   48198 command_runner.go:130] >       "uid": null,
	I0729 17:51:54.755852   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.755862   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.755869   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.755877   48198 command_runner.go:130] >     },
	I0729 17:51:54.755885   48198 command_runner.go:130] >     {
	I0729 17:51:54.755898   48198 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 17:51:54.755908   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.755918   48198 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 17:51:54.755926   48198 command_runner.go:130] >       ],
	I0729 17:51:54.755932   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.755942   48198 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 17:51:54.755957   48198 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 17:51:54.755966   48198 command_runner.go:130] >       ],
	I0729 17:51:54.755976   48198 command_runner.go:130] >       "size": "1363676",
	I0729 17:51:54.755984   48198 command_runner.go:130] >       "uid": null,
	I0729 17:51:54.755994   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.756003   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.756011   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.756017   48198 command_runner.go:130] >     },
	I0729 17:51:54.756021   48198 command_runner.go:130] >     {
	I0729 17:51:54.756034   48198 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 17:51:54.756044   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.756055   48198 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 17:51:54.756063   48198 command_runner.go:130] >       ],
	I0729 17:51:54.756073   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.756093   48198 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 17:51:54.756112   48198 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 17:51:54.756120   48198 command_runner.go:130] >       ],
	I0729 17:51:54.756127   48198 command_runner.go:130] >       "size": "31470524",
	I0729 17:51:54.756136   48198 command_runner.go:130] >       "uid": null,
	I0729 17:51:54.756146   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.756164   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.756173   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.756181   48198 command_runner.go:130] >     },
	I0729 17:51:54.756186   48198 command_runner.go:130] >     {
	I0729 17:51:54.756193   48198 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 17:51:54.756201   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.756213   48198 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 17:51:54.756222   48198 command_runner.go:130] >       ],
	I0729 17:51:54.756231   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.756246   48198 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 17:51:54.756261   48198 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 17:51:54.756269   48198 command_runner.go:130] >       ],
	I0729 17:51:54.756274   48198 command_runner.go:130] >       "size": "61245718",
	I0729 17:51:54.756278   48198 command_runner.go:130] >       "uid": null,
	I0729 17:51:54.756285   48198 command_runner.go:130] >       "username": "nonroot",
	I0729 17:51:54.756295   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.756303   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.756309   48198 command_runner.go:130] >     },
	I0729 17:51:54.756317   48198 command_runner.go:130] >     {
	I0729 17:51:54.756327   48198 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 17:51:54.756336   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.756346   48198 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 17:51:54.756353   48198 command_runner.go:130] >       ],
	I0729 17:51:54.756358   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.756370   48198 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 17:51:54.756383   48198 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 17:51:54.756392   48198 command_runner.go:130] >       ],
	I0729 17:51:54.756402   48198 command_runner.go:130] >       "size": "150779692",
	I0729 17:51:54.756409   48198 command_runner.go:130] >       "uid": {
	I0729 17:51:54.756416   48198 command_runner.go:130] >         "value": "0"
	I0729 17:51:54.756425   48198 command_runner.go:130] >       },
	I0729 17:51:54.756435   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.756443   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.756449   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.756452   48198 command_runner.go:130] >     },
	I0729 17:51:54.756460   48198 command_runner.go:130] >     {
	I0729 17:51:54.756479   48198 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 17:51:54.756489   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.756500   48198 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 17:51:54.756508   48198 command_runner.go:130] >       ],
	I0729 17:51:54.756517   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.756531   48198 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 17:51:54.756545   48198 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 17:51:54.756555   48198 command_runner.go:130] >       ],
	I0729 17:51:54.756560   48198 command_runner.go:130] >       "size": "117609954",
	I0729 17:51:54.756565   48198 command_runner.go:130] >       "uid": {
	I0729 17:51:54.756570   48198 command_runner.go:130] >         "value": "0"
	I0729 17:51:54.756575   48198 command_runner.go:130] >       },
	I0729 17:51:54.756581   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.756587   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.756594   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.756599   48198 command_runner.go:130] >     },
	I0729 17:51:54.756604   48198 command_runner.go:130] >     {
	I0729 17:51:54.756615   48198 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 17:51:54.756621   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.756630   48198 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 17:51:54.756635   48198 command_runner.go:130] >       ],
	I0729 17:51:54.756646   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.756676   48198 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 17:51:54.756693   48198 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 17:51:54.756706   48198 command_runner.go:130] >       ],
	I0729 17:51:54.756713   48198 command_runner.go:130] >       "size": "112198984",
	I0729 17:51:54.756719   48198 command_runner.go:130] >       "uid": {
	I0729 17:51:54.756727   48198 command_runner.go:130] >         "value": "0"
	I0729 17:51:54.756733   48198 command_runner.go:130] >       },
	I0729 17:51:54.756820   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.756859   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.756869   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.756874   48198 command_runner.go:130] >     },
	I0729 17:51:54.756883   48198 command_runner.go:130] >     {
	I0729 17:51:54.756892   48198 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 17:51:54.756901   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.756926   48198 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 17:51:54.756936   48198 command_runner.go:130] >       ],
	I0729 17:51:54.756943   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.756958   48198 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 17:51:54.756972   48198 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 17:51:54.756979   48198 command_runner.go:130] >       ],
	I0729 17:51:54.756985   48198 command_runner.go:130] >       "size": "85953945",
	I0729 17:51:54.756992   48198 command_runner.go:130] >       "uid": null,
	I0729 17:51:54.756997   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.757001   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.757006   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.757015   48198 command_runner.go:130] >     },
	I0729 17:51:54.757018   48198 command_runner.go:130] >     {
	I0729 17:51:54.757025   48198 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 17:51:54.757031   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.757035   48198 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 17:51:54.757039   48198 command_runner.go:130] >       ],
	I0729 17:51:54.757043   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.757053   48198 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 17:51:54.757062   48198 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 17:51:54.757065   48198 command_runner.go:130] >       ],
	I0729 17:51:54.757069   48198 command_runner.go:130] >       "size": "63051080",
	I0729 17:51:54.757075   48198 command_runner.go:130] >       "uid": {
	I0729 17:51:54.757079   48198 command_runner.go:130] >         "value": "0"
	I0729 17:51:54.757083   48198 command_runner.go:130] >       },
	I0729 17:51:54.757094   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.757101   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.757105   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.757111   48198 command_runner.go:130] >     },
	I0729 17:51:54.757114   48198 command_runner.go:130] >     {
	I0729 17:51:54.757120   48198 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 17:51:54.757126   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.757130   48198 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 17:51:54.757136   48198 command_runner.go:130] >       ],
	I0729 17:51:54.757140   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.757146   48198 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 17:51:54.757161   48198 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 17:51:54.757166   48198 command_runner.go:130] >       ],
	I0729 17:51:54.757170   48198 command_runner.go:130] >       "size": "750414",
	I0729 17:51:54.757174   48198 command_runner.go:130] >       "uid": {
	I0729 17:51:54.757178   48198 command_runner.go:130] >         "value": "65535"
	I0729 17:51:54.757182   48198 command_runner.go:130] >       },
	I0729 17:51:54.757187   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.757191   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.757197   48198 command_runner.go:130] >       "pinned": true
	I0729 17:51:54.757200   48198 command_runner.go:130] >     }
	I0729 17:51:54.757203   48198 command_runner.go:130] >   ]
	I0729 17:51:54.757208   48198 command_runner.go:130] > }
	I0729 17:51:54.757400   48198 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 17:51:54.757411   48198 crio.go:433] Images already preloaded, skipping extraction
	I0729 17:51:54.757464   48198 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 17:51:54.794756   48198 command_runner.go:130] > {
	I0729 17:51:54.794777   48198 command_runner.go:130] >   "images": [
	I0729 17:51:54.794780   48198 command_runner.go:130] >     {
	I0729 17:51:54.794788   48198 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 17:51:54.794793   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.794798   48198 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 17:51:54.794801   48198 command_runner.go:130] >       ],
	I0729 17:51:54.794808   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.794816   48198 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 17:51:54.794823   48198 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 17:51:54.794826   48198 command_runner.go:130] >       ],
	I0729 17:51:54.794830   48198 command_runner.go:130] >       "size": "87165492",
	I0729 17:51:54.794834   48198 command_runner.go:130] >       "uid": null,
	I0729 17:51:54.794837   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.794851   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.794858   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.794863   48198 command_runner.go:130] >     },
	I0729 17:51:54.794866   48198 command_runner.go:130] >     {
	I0729 17:51:54.794871   48198 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 17:51:54.794876   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.794881   48198 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 17:51:54.794884   48198 command_runner.go:130] >       ],
	I0729 17:51:54.794888   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.794895   48198 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 17:51:54.794903   48198 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 17:51:54.794906   48198 command_runner.go:130] >       ],
	I0729 17:51:54.794912   48198 command_runner.go:130] >       "size": "87174707",
	I0729 17:51:54.794916   48198 command_runner.go:130] >       "uid": null,
	I0729 17:51:54.794923   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.794930   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.794933   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.794943   48198 command_runner.go:130] >     },
	I0729 17:51:54.794950   48198 command_runner.go:130] >     {
	I0729 17:51:54.794956   48198 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 17:51:54.794960   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.794965   48198 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 17:51:54.794968   48198 command_runner.go:130] >       ],
	I0729 17:51:54.794972   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.794980   48198 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 17:51:54.794987   48198 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 17:51:54.794991   48198 command_runner.go:130] >       ],
	I0729 17:51:54.794995   48198 command_runner.go:130] >       "size": "1363676",
	I0729 17:51:54.794999   48198 command_runner.go:130] >       "uid": null,
	I0729 17:51:54.795003   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.795007   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.795011   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.795015   48198 command_runner.go:130] >     },
	I0729 17:51:54.795018   48198 command_runner.go:130] >     {
	I0729 17:51:54.795024   48198 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 17:51:54.795029   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.795034   48198 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 17:51:54.795037   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795041   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.795051   48198 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 17:51:54.795070   48198 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 17:51:54.795076   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795079   48198 command_runner.go:130] >       "size": "31470524",
	I0729 17:51:54.795083   48198 command_runner.go:130] >       "uid": null,
	I0729 17:51:54.795087   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.795092   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.795097   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.795102   48198 command_runner.go:130] >     },
	I0729 17:51:54.795105   48198 command_runner.go:130] >     {
	I0729 17:51:54.795111   48198 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 17:51:54.795118   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.795122   48198 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 17:51:54.795126   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795135   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.795145   48198 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 17:51:54.795152   48198 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 17:51:54.795156   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795161   48198 command_runner.go:130] >       "size": "61245718",
	I0729 17:51:54.795166   48198 command_runner.go:130] >       "uid": null,
	I0729 17:51:54.795173   48198 command_runner.go:130] >       "username": "nonroot",
	I0729 17:51:54.795176   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.795181   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.795186   48198 command_runner.go:130] >     },
	I0729 17:51:54.795190   48198 command_runner.go:130] >     {
	I0729 17:51:54.795195   48198 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 17:51:54.795202   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.795206   48198 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 17:51:54.795210   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795214   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.795223   48198 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 17:51:54.795233   48198 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 17:51:54.795237   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795241   48198 command_runner.go:130] >       "size": "150779692",
	I0729 17:51:54.795245   48198 command_runner.go:130] >       "uid": {
	I0729 17:51:54.795250   48198 command_runner.go:130] >         "value": "0"
	I0729 17:51:54.795255   48198 command_runner.go:130] >       },
	I0729 17:51:54.795259   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.795263   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.795267   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.795273   48198 command_runner.go:130] >     },
	I0729 17:51:54.795276   48198 command_runner.go:130] >     {
	I0729 17:51:54.795282   48198 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 17:51:54.795288   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.795293   48198 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 17:51:54.795299   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795302   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.795312   48198 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 17:51:54.795321   48198 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 17:51:54.795324   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795332   48198 command_runner.go:130] >       "size": "117609954",
	I0729 17:51:54.795338   48198 command_runner.go:130] >       "uid": {
	I0729 17:51:54.795342   48198 command_runner.go:130] >         "value": "0"
	I0729 17:51:54.795345   48198 command_runner.go:130] >       },
	I0729 17:51:54.795351   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.795355   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.795361   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.795365   48198 command_runner.go:130] >     },
	I0729 17:51:54.795372   48198 command_runner.go:130] >     {
	I0729 17:51:54.795380   48198 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 17:51:54.795386   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.795392   48198 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 17:51:54.795398   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795402   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.795424   48198 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 17:51:54.795433   48198 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 17:51:54.795439   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795443   48198 command_runner.go:130] >       "size": "112198984",
	I0729 17:51:54.795447   48198 command_runner.go:130] >       "uid": {
	I0729 17:51:54.795453   48198 command_runner.go:130] >         "value": "0"
	I0729 17:51:54.795457   48198 command_runner.go:130] >       },
	I0729 17:51:54.795463   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.795467   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.795473   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.795476   48198 command_runner.go:130] >     },
	I0729 17:51:54.795482   48198 command_runner.go:130] >     {
	I0729 17:51:54.795488   48198 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 17:51:54.795494   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.795499   48198 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 17:51:54.795504   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795508   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.795517   48198 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 17:51:54.795526   48198 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 17:51:54.795531   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795535   48198 command_runner.go:130] >       "size": "85953945",
	I0729 17:51:54.795539   48198 command_runner.go:130] >       "uid": null,
	I0729 17:51:54.795549   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.795556   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.795560   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.795565   48198 command_runner.go:130] >     },
	I0729 17:51:54.795568   48198 command_runner.go:130] >     {
	I0729 17:51:54.795576   48198 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 17:51:54.795580   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.795587   48198 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 17:51:54.795601   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795607   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.795614   48198 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 17:51:54.795624   48198 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 17:51:54.795630   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795634   48198 command_runner.go:130] >       "size": "63051080",
	I0729 17:51:54.795638   48198 command_runner.go:130] >       "uid": {
	I0729 17:51:54.795643   48198 command_runner.go:130] >         "value": "0"
	I0729 17:51:54.795647   48198 command_runner.go:130] >       },
	I0729 17:51:54.795653   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.795656   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.795660   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.795663   48198 command_runner.go:130] >     },
	I0729 17:51:54.795667   48198 command_runner.go:130] >     {
	I0729 17:51:54.795673   48198 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 17:51:54.795679   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.795684   48198 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 17:51:54.795689   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795693   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.795700   48198 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 17:51:54.795708   48198 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 17:51:54.795712   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795718   48198 command_runner.go:130] >       "size": "750414",
	I0729 17:51:54.795722   48198 command_runner.go:130] >       "uid": {
	I0729 17:51:54.795726   48198 command_runner.go:130] >         "value": "65535"
	I0729 17:51:54.795729   48198 command_runner.go:130] >       },
	I0729 17:51:54.795733   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.795737   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.795746   48198 command_runner.go:130] >       "pinned": true
	I0729 17:51:54.795752   48198 command_runner.go:130] >     }
	I0729 17:51:54.795755   48198 command_runner.go:130] >   ]
	I0729 17:51:54.795760   48198 command_runner.go:130] > }
	I0729 17:51:54.796204   48198 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 17:51:54.796224   48198 cache_images.go:84] Images are preloaded, skipping loading
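Editor's note: the two `sudo crictl images --output json` dumps above are what the preload check consumes before concluding "Images are preloaded, skipping loading". A minimal, illustrative Go sketch of such a check follows; the helper names and the hard-coded image list are taken from this log, not from minikube's actual implementation.

// preloadcheck.go - illustrative sketch (not minikube's real code) of verifying
// that the images kubeadm needs for v1.30.3/cri-o are already present, using
// the same `crictl images --output json` output shown in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImage mirrors the fields visible in the JSON dump above.
type crictlImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
	Pinned      bool     `json:"pinned"`
}

type crictlImageList struct {
	Images []crictlImage `json:"images"`
}

func main() {
	// Required images as listed in the log for Kubernetes v1.30.3 on CRI-O.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.30.3",
		"registry.k8s.io/kube-controller-manager:v1.30.3",
		"registry.k8s.io/kube-scheduler:v1.30.3",
		"registry.k8s.io/kube-proxy:v1.30.3",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/pause:3.9",
	}

	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}

	var listed crictlImageList
	if err := json.Unmarshal(out, &listed); err != nil {
		fmt.Println("cannot parse crictl output:", err)
		return
	}

	// Index every repo tag reported by the runtime.
	have := map[string]bool{}
	for _, img := range listed.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}

	for _, want := range required {
		if !have[want] {
			fmt.Println("missing, preload extraction needed:", want)
			return
		}
	}
	fmt.Println("all images are preloaded for cri-o runtime")
}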
	I0729 17:51:54.796237   48198 kubeadm.go:934] updating node { 192.168.39.218 8443 v1.30.3 crio true true} ...
	I0729 17:51:54.796333   48198 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-602258 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-602258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
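Editor's note: the kubelet [Unit]/[Service] override logged just above is rendered from the node config that follows it. A rough sketch of producing the same drop-in with text/template is shown below; the template string and field names here are illustrative reconstructions from this log, not minikube's actual template.

// kubelet_dropin.go - illustrative sketch of rendering a kubelet systemd
// drop-in like the one logged above for multinode-602258.
package main

import (
	"os"
	"text/template"
)

const dropin = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	// Values taken from the control-plane node of multinode-602258 in the log.
	data := struct {
		Runtime, KubernetesVersion, NodeName, NodeIP string
	}{"crio", "v1.30.3", "multinode-602258", "192.168.39.218"}

	tmpl := template.Must(template.New("kubelet").Parse(dropin))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}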
	I0729 17:51:54.796390   48198 ssh_runner.go:195] Run: crio config
	I0729 17:51:54.834317   48198 command_runner.go:130] ! time="2024-07-29 17:51:54.801427832Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0729 17:51:54.840032   48198 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0729 17:51:54.852565   48198 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0729 17:51:54.852593   48198 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0729 17:51:54.852608   48198 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0729 17:51:54.852613   48198 command_runner.go:130] > #
	I0729 17:51:54.852624   48198 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0729 17:51:54.852633   48198 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0729 17:51:54.852643   48198 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0729 17:51:54.852659   48198 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0729 17:51:54.852668   48198 command_runner.go:130] > # reload'.
	I0729 17:51:54.852681   48198 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0729 17:51:54.852692   48198 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0729 17:51:54.852701   48198 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0729 17:51:54.852709   48198 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0729 17:51:54.852715   48198 command_runner.go:130] > [crio]
	I0729 17:51:54.852721   48198 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0729 17:51:54.852728   48198 command_runner.go:130] > # containers images, in this directory.
	I0729 17:51:54.852732   48198 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0729 17:51:54.852744   48198 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0729 17:51:54.852756   48198 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0729 17:51:54.852767   48198 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0729 17:51:54.852776   48198 command_runner.go:130] > # imagestore = ""
	I0729 17:51:54.852788   48198 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0729 17:51:54.852800   48198 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0729 17:51:54.852809   48198 command_runner.go:130] > storage_driver = "overlay"
	I0729 17:51:54.852820   48198 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0729 17:51:54.852832   48198 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0729 17:51:54.852841   48198 command_runner.go:130] > storage_option = [
	I0729 17:51:54.852849   48198 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0729 17:51:54.852857   48198 command_runner.go:130] > ]
	I0729 17:51:54.852867   48198 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0729 17:51:54.852880   48198 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0729 17:51:54.852890   48198 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0729 17:51:54.852902   48198 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0729 17:51:54.852913   48198 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0729 17:51:54.852920   48198 command_runner.go:130] > # always happen on a node reboot
	I0729 17:51:54.852925   48198 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0729 17:51:54.852939   48198 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0729 17:51:54.852946   48198 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0729 17:51:54.852952   48198 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0729 17:51:54.852958   48198 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0729 17:51:54.852966   48198 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0729 17:51:54.852977   48198 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0729 17:51:54.852983   48198 command_runner.go:130] > # internal_wipe = true
	I0729 17:51:54.852991   48198 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0729 17:51:54.852998   48198 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0729 17:51:54.853002   48198 command_runner.go:130] > # internal_repair = false
	I0729 17:51:54.853008   48198 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0729 17:51:54.853016   48198 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0729 17:51:54.853022   48198 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0729 17:51:54.853030   48198 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0729 17:51:54.853036   48198 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0729 17:51:54.853041   48198 command_runner.go:130] > [crio.api]
	I0729 17:51:54.853046   48198 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0729 17:51:54.853052   48198 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0729 17:51:54.853062   48198 command_runner.go:130] > # IP address on which the stream server will listen.
	I0729 17:51:54.853069   48198 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0729 17:51:54.853075   48198 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0729 17:51:54.853082   48198 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0729 17:51:54.853086   48198 command_runner.go:130] > # stream_port = "0"
	I0729 17:51:54.853093   48198 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0729 17:51:54.853101   48198 command_runner.go:130] > # stream_enable_tls = false
	I0729 17:51:54.853109   48198 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0729 17:51:54.853113   48198 command_runner.go:130] > # stream_idle_timeout = ""
	I0729 17:51:54.853119   48198 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0729 17:51:54.853127   48198 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0729 17:51:54.853131   48198 command_runner.go:130] > # minutes.
	I0729 17:51:54.853135   48198 command_runner.go:130] > # stream_tls_cert = ""
	I0729 17:51:54.853143   48198 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0729 17:51:54.853151   48198 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0729 17:51:54.853155   48198 command_runner.go:130] > # stream_tls_key = ""
	I0729 17:51:54.853163   48198 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0729 17:51:54.853171   48198 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0729 17:51:54.853194   48198 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0729 17:51:54.853201   48198 command_runner.go:130] > # stream_tls_ca = ""
	I0729 17:51:54.853208   48198 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 17:51:54.853215   48198 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0729 17:51:54.853222   48198 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 17:51:54.853228   48198 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0729 17:51:54.853234   48198 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0729 17:51:54.853241   48198 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0729 17:51:54.853245   48198 command_runner.go:130] > [crio.runtime]
	I0729 17:51:54.853252   48198 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0729 17:51:54.853259   48198 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0729 17:51:54.853263   48198 command_runner.go:130] > # "nofile=1024:2048"
	I0729 17:51:54.853271   48198 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0729 17:51:54.853276   48198 command_runner.go:130] > # default_ulimits = [
	I0729 17:51:54.853281   48198 command_runner.go:130] > # ]
	I0729 17:51:54.853287   48198 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0729 17:51:54.853293   48198 command_runner.go:130] > # no_pivot = false
	I0729 17:51:54.853299   48198 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0729 17:51:54.853310   48198 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0729 17:51:54.853317   48198 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0729 17:51:54.853323   48198 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0729 17:51:54.853329   48198 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0729 17:51:54.853336   48198 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 17:51:54.853342   48198 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0729 17:51:54.853346   48198 command_runner.go:130] > # Cgroup setting for conmon
	I0729 17:51:54.853354   48198 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0729 17:51:54.853359   48198 command_runner.go:130] > conmon_cgroup = "pod"
	I0729 17:51:54.853365   48198 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0729 17:51:54.853371   48198 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0729 17:51:54.853378   48198 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 17:51:54.853384   48198 command_runner.go:130] > conmon_env = [
	I0729 17:51:54.853390   48198 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 17:51:54.853395   48198 command_runner.go:130] > ]
	I0729 17:51:54.853401   48198 command_runner.go:130] > # Additional environment variables to set for all the
	I0729 17:51:54.853407   48198 command_runner.go:130] > # containers. These are overridden if set in the
	I0729 17:51:54.853412   48198 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0729 17:51:54.853418   48198 command_runner.go:130] > # default_env = [
	I0729 17:51:54.853421   48198 command_runner.go:130] > # ]
	I0729 17:51:54.853430   48198 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0729 17:51:54.853436   48198 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0729 17:51:54.853442   48198 command_runner.go:130] > # selinux = false
	I0729 17:51:54.853448   48198 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0729 17:51:54.853456   48198 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0729 17:51:54.853465   48198 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0729 17:51:54.853471   48198 command_runner.go:130] > # seccomp_profile = ""
	I0729 17:51:54.853477   48198 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0729 17:51:54.853484   48198 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0729 17:51:54.853492   48198 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0729 17:51:54.853496   48198 command_runner.go:130] > # which might increase security.
	I0729 17:51:54.853503   48198 command_runner.go:130] > # This option is currently deprecated,
	I0729 17:51:54.853508   48198 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0729 17:51:54.853515   48198 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0729 17:51:54.853520   48198 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0729 17:51:54.853528   48198 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0729 17:51:54.853538   48198 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0729 17:51:54.853546   48198 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0729 17:51:54.853554   48198 command_runner.go:130] > # This option supports live configuration reload.
	I0729 17:51:54.853560   48198 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0729 17:51:54.853566   48198 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0729 17:51:54.853573   48198 command_runner.go:130] > # the cgroup blockio controller.
	I0729 17:51:54.853577   48198 command_runner.go:130] > # blockio_config_file = ""
	I0729 17:51:54.853586   48198 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0729 17:51:54.853592   48198 command_runner.go:130] > # blockio parameters.
	I0729 17:51:54.853596   48198 command_runner.go:130] > # blockio_reload = false
	I0729 17:51:54.853604   48198 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0729 17:51:54.853613   48198 command_runner.go:130] > # irqbalance daemon.
	I0729 17:51:54.853620   48198 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0729 17:51:54.853625   48198 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0729 17:51:54.853636   48198 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0729 17:51:54.853645   48198 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0729 17:51:54.853652   48198 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0729 17:51:54.853660   48198 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0729 17:51:54.853666   48198 command_runner.go:130] > # This option supports live configuration reload.
	I0729 17:51:54.853672   48198 command_runner.go:130] > # rdt_config_file = ""
	I0729 17:51:54.853677   48198 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0729 17:51:54.853683   48198 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0729 17:51:54.853711   48198 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0729 17:51:54.853719   48198 command_runner.go:130] > # separate_pull_cgroup = ""
	I0729 17:51:54.853724   48198 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0729 17:51:54.853731   48198 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0729 17:51:54.853736   48198 command_runner.go:130] > # will be added.
	I0729 17:51:54.853741   48198 command_runner.go:130] > # default_capabilities = [
	I0729 17:51:54.853746   48198 command_runner.go:130] > # 	"CHOWN",
	I0729 17:51:54.853750   48198 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0729 17:51:54.853756   48198 command_runner.go:130] > # 	"FSETID",
	I0729 17:51:54.853759   48198 command_runner.go:130] > # 	"FOWNER",
	I0729 17:51:54.853765   48198 command_runner.go:130] > # 	"SETGID",
	I0729 17:51:54.853776   48198 command_runner.go:130] > # 	"SETUID",
	I0729 17:51:54.853785   48198 command_runner.go:130] > # 	"SETPCAP",
	I0729 17:51:54.853794   48198 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0729 17:51:54.853808   48198 command_runner.go:130] > # 	"KILL",
	I0729 17:51:54.853816   48198 command_runner.go:130] > # ]
	I0729 17:51:54.853829   48198 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0729 17:51:54.853843   48198 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0729 17:51:54.853853   48198 command_runner.go:130] > # add_inheritable_capabilities = false
	I0729 17:51:54.853865   48198 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0729 17:51:54.853875   48198 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 17:51:54.853882   48198 command_runner.go:130] > default_sysctls = [
	I0729 17:51:54.853886   48198 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0729 17:51:54.853892   48198 command_runner.go:130] > ]
	I0729 17:51:54.853896   48198 command_runner.go:130] > # List of devices on the host that a
	I0729 17:51:54.853904   48198 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0729 17:51:54.853910   48198 command_runner.go:130] > # allowed_devices = [
	I0729 17:51:54.853914   48198 command_runner.go:130] > # 	"/dev/fuse",
	I0729 17:51:54.853919   48198 command_runner.go:130] > # ]
	I0729 17:51:54.853923   48198 command_runner.go:130] > # List of additional devices. specified as
	I0729 17:51:54.853932   48198 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0729 17:51:54.853940   48198 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0729 17:51:54.853945   48198 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 17:51:54.853951   48198 command_runner.go:130] > # additional_devices = [
	I0729 17:51:54.853954   48198 command_runner.go:130] > # ]
	I0729 17:51:54.853959   48198 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0729 17:51:54.853965   48198 command_runner.go:130] > # cdi_spec_dirs = [
	I0729 17:51:54.853969   48198 command_runner.go:130] > # 	"/etc/cdi",
	I0729 17:51:54.853975   48198 command_runner.go:130] > # 	"/var/run/cdi",
	I0729 17:51:54.853979   48198 command_runner.go:130] > # ]
	I0729 17:51:54.853987   48198 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0729 17:51:54.853994   48198 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0729 17:51:54.853998   48198 command_runner.go:130] > # Defaults to false.
	I0729 17:51:54.854005   48198 command_runner.go:130] > # device_ownership_from_security_context = false
	I0729 17:51:54.854011   48198 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0729 17:51:54.854019   48198 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0729 17:51:54.854022   48198 command_runner.go:130] > # hooks_dir = [
	I0729 17:51:54.854027   48198 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0729 17:51:54.854031   48198 command_runner.go:130] > # ]
	I0729 17:51:54.854039   48198 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0729 17:51:54.854050   48198 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0729 17:51:54.854057   48198 command_runner.go:130] > # its default mounts from the following two files:
	I0729 17:51:54.854060   48198 command_runner.go:130] > #
	I0729 17:51:54.854066   48198 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0729 17:51:54.854075   48198 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0729 17:51:54.854082   48198 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0729 17:51:54.854085   48198 command_runner.go:130] > #
	I0729 17:51:54.854091   48198 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0729 17:51:54.854102   48198 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0729 17:51:54.854110   48198 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0729 17:51:54.854115   48198 command_runner.go:130] > #      only add mounts it finds in this file.
	I0729 17:51:54.854119   48198 command_runner.go:130] > #
	I0729 17:51:54.854123   48198 command_runner.go:130] > # default_mounts_file = ""
	I0729 17:51:54.854128   48198 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0729 17:51:54.854137   48198 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0729 17:51:54.854143   48198 command_runner.go:130] > pids_limit = 1024
	I0729 17:51:54.854148   48198 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0729 17:51:54.854156   48198 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0729 17:51:54.854163   48198 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0729 17:51:54.854173   48198 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0729 17:51:54.854178   48198 command_runner.go:130] > # log_size_max = -1
	I0729 17:51:54.854185   48198 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0729 17:51:54.854191   48198 command_runner.go:130] > # log_to_journald = false
	I0729 17:51:54.854197   48198 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0729 17:51:54.854204   48198 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0729 17:51:54.854209   48198 command_runner.go:130] > # Path to directory for container attach sockets.
	I0729 17:51:54.854216   48198 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0729 17:51:54.854221   48198 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0729 17:51:54.854227   48198 command_runner.go:130] > # bind_mount_prefix = ""
	I0729 17:51:54.854232   48198 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0729 17:51:54.854238   48198 command_runner.go:130] > # read_only = false
	I0729 17:51:54.854243   48198 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0729 17:51:54.854251   48198 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0729 17:51:54.854258   48198 command_runner.go:130] > # live configuration reload.
	I0729 17:51:54.854262   48198 command_runner.go:130] > # log_level = "info"
	I0729 17:51:54.854269   48198 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0729 17:51:54.854279   48198 command_runner.go:130] > # This option supports live configuration reload.
	I0729 17:51:54.854285   48198 command_runner.go:130] > # log_filter = ""
	I0729 17:51:54.854291   48198 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0729 17:51:54.854300   48198 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0729 17:51:54.854306   48198 command_runner.go:130] > # separated by comma.
	I0729 17:51:54.854329   48198 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 17:51:54.854341   48198 command_runner.go:130] > # uid_mappings = ""
	I0729 17:51:54.854349   48198 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0729 17:51:54.854357   48198 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0729 17:51:54.854382   48198 command_runner.go:130] > # separated by comma.
	I0729 17:51:54.854391   48198 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 17:51:54.854398   48198 command_runner.go:130] > # gid_mappings = ""
	I0729 17:51:54.854404   48198 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0729 17:51:54.854412   48198 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 17:51:54.854420   48198 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 17:51:54.854429   48198 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 17:51:54.854435   48198 command_runner.go:130] > # minimum_mappable_uid = -1
	I0729 17:51:54.854441   48198 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0729 17:51:54.854449   48198 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 17:51:54.854457   48198 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 17:51:54.854464   48198 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 17:51:54.854470   48198 command_runner.go:130] > # minimum_mappable_gid = -1
	I0729 17:51:54.854476   48198 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0729 17:51:54.854484   48198 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0729 17:51:54.854491   48198 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0729 17:51:54.854495   48198 command_runner.go:130] > # ctr_stop_timeout = 30
	I0729 17:51:54.854501   48198 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0729 17:51:54.854509   48198 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0729 17:51:54.854516   48198 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0729 17:51:54.854520   48198 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0729 17:51:54.854526   48198 command_runner.go:130] > drop_infra_ctr = false
	I0729 17:51:54.854532   48198 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0729 17:51:54.854540   48198 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0729 17:51:54.854547   48198 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0729 17:51:54.854553   48198 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0729 17:51:54.854559   48198 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0729 17:51:54.854575   48198 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0729 17:51:54.854582   48198 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0729 17:51:54.854589   48198 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0729 17:51:54.854593   48198 command_runner.go:130] > # shared_cpuset = ""
	I0729 17:51:54.854601   48198 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0729 17:51:54.854606   48198 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0729 17:51:54.854612   48198 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0729 17:51:54.854618   48198 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0729 17:51:54.854624   48198 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0729 17:51:54.854630   48198 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0729 17:51:54.854637   48198 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0729 17:51:54.854641   48198 command_runner.go:130] > # enable_criu_support = false
	I0729 17:51:54.854648   48198 command_runner.go:130] > # Enable/disable the generation of the container,
	I0729 17:51:54.854654   48198 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0729 17:51:54.854660   48198 command_runner.go:130] > # enable_pod_events = false
	I0729 17:51:54.854665   48198 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0729 17:51:54.854679   48198 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0729 17:51:54.854684   48198 command_runner.go:130] > # default_runtime = "runc"
	I0729 17:51:54.854689   48198 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0729 17:51:54.854698   48198 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0729 17:51:54.854709   48198 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0729 17:51:54.854716   48198 command_runner.go:130] > # creation as a file is not desired either.
	I0729 17:51:54.854724   48198 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0729 17:51:54.854730   48198 command_runner.go:130] > # the hostname is being managed dynamically.
	I0729 17:51:54.854735   48198 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0729 17:51:54.854740   48198 command_runner.go:130] > # ]
	I0729 17:51:54.854746   48198 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0729 17:51:54.854754   48198 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0729 17:51:54.854761   48198 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0729 17:51:54.854770   48198 command_runner.go:130] > # Each entry in the table should follow the format:
	I0729 17:51:54.854779   48198 command_runner.go:130] > #
	I0729 17:51:54.854788   48198 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0729 17:51:54.854799   48198 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0729 17:51:54.854860   48198 command_runner.go:130] > # runtime_type = "oci"
	I0729 17:51:54.854870   48198 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0729 17:51:54.854879   48198 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0729 17:51:54.854886   48198 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0729 17:51:54.854890   48198 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0729 17:51:54.854896   48198 command_runner.go:130] > # monitor_env = []
	I0729 17:51:54.854901   48198 command_runner.go:130] > # privileged_without_host_devices = false
	I0729 17:51:54.854905   48198 command_runner.go:130] > # allowed_annotations = []
	I0729 17:51:54.854912   48198 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0729 17:51:54.854915   48198 command_runner.go:130] > # Where:
	I0729 17:51:54.854921   48198 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0729 17:51:54.854929   48198 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0729 17:51:54.854935   48198 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0729 17:51:54.854943   48198 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0729 17:51:54.854947   48198 command_runner.go:130] > #   in $PATH.
	I0729 17:51:54.854954   48198 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0729 17:51:54.854960   48198 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0729 17:51:54.854966   48198 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0729 17:51:54.854971   48198 command_runner.go:130] > #   state.
	I0729 17:51:54.854978   48198 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0729 17:51:54.854985   48198 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0729 17:51:54.854994   48198 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0729 17:51:54.855002   48198 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0729 17:51:54.855008   48198 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0729 17:51:54.855016   48198 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0729 17:51:54.855023   48198 command_runner.go:130] > #   The currently recognized values are:
	I0729 17:51:54.855029   48198 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0729 17:51:54.855038   48198 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0729 17:51:54.855048   48198 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0729 17:51:54.855055   48198 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0729 17:51:54.855065   48198 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0729 17:51:54.855073   48198 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0729 17:51:54.855080   48198 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0729 17:51:54.855087   48198 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0729 17:51:54.855093   48198 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0729 17:51:54.855105   48198 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0729 17:51:54.855111   48198 command_runner.go:130] > #   deprecated option "conmon".
	I0729 17:51:54.855118   48198 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0729 17:51:54.855130   48198 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0729 17:51:54.855138   48198 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0729 17:51:54.855145   48198 command_runner.go:130] > #   should be moved to the container's cgroup
	I0729 17:51:54.855151   48198 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0729 17:51:54.855158   48198 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0729 17:51:54.855164   48198 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0729 17:51:54.855172   48198 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0729 17:51:54.855175   48198 command_runner.go:130] > #
	I0729 17:51:54.855180   48198 command_runner.go:130] > # Using the seccomp notifier feature:
	I0729 17:51:54.855185   48198 command_runner.go:130] > #
	I0729 17:51:54.855190   48198 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0729 17:51:54.855198   48198 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0729 17:51:54.855202   48198 command_runner.go:130] > #
	I0729 17:51:54.855208   48198 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0729 17:51:54.855216   48198 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0729 17:51:54.855221   48198 command_runner.go:130] > #
	I0729 17:51:54.855227   48198 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0729 17:51:54.855232   48198 command_runner.go:130] > # feature.
	I0729 17:51:54.855235   48198 command_runner.go:130] > #
	I0729 17:51:54.855240   48198 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0729 17:51:54.855248   48198 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0729 17:51:54.855254   48198 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0729 17:51:54.855261   48198 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0729 17:51:54.855267   48198 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0729 17:51:54.855272   48198 command_runner.go:130] > #
	I0729 17:51:54.855278   48198 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0729 17:51:54.855286   48198 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0729 17:51:54.855290   48198 command_runner.go:130] > #
	I0729 17:51:54.855296   48198 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0729 17:51:54.855303   48198 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0729 17:51:54.855307   48198 command_runner.go:130] > #
	I0729 17:51:54.855313   48198 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0729 17:51:54.855320   48198 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0729 17:51:54.855324   48198 command_runner.go:130] > # limitation.
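Following the runtime-table format and seccomp notifier notes above, a minimal sketch of a handler that opts into the notifier (the handler name and binary path are assumptions for illustration) might be:
	[crio.runtime.runtimes.runc-notify]
	runtime_path = "/usr/bin/runc"   # assumed; the feature needs runc >= 1.1.0 or crun >= 0.19
	runtime_type = "oci"
	# Permit the annotation so CRI-O can react to blocked syscalls for this handler.
	allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]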
	I0729 17:51:54.855332   48198 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0729 17:51:54.855338   48198 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0729 17:51:54.855346   48198 command_runner.go:130] > runtime_type = "oci"
	I0729 17:51:54.855353   48198 command_runner.go:130] > runtime_root = "/run/runc"
	I0729 17:51:54.855357   48198 command_runner.go:130] > runtime_config_path = ""
	I0729 17:51:54.855363   48198 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0729 17:51:54.855367   48198 command_runner.go:130] > monitor_cgroup = "pod"
	I0729 17:51:54.855374   48198 command_runner.go:130] > monitor_exec_cgroup = ""
	I0729 17:51:54.855378   48198 command_runner.go:130] > monitor_env = [
	I0729 17:51:54.855385   48198 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 17:51:54.855391   48198 command_runner.go:130] > ]
	I0729 17:51:54.855395   48198 command_runner.go:130] > privileged_without_host_devices = false
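For contrast with the runc entry above, a hedged sketch of a "vm"-type handler that uses runtime_config_path (paths assume a typical Kata Containers install) could be:
	[crio.runtime.runtimes.kata]
	runtime_path = "/usr/bin/containerd-shim-kata-v2"   # assumed install location
	runtime_type = "vm"
	runtime_config_path = "/usr/share/defaults/kata-containers/configuration.toml"   # only honored for "vm" runtimes
	privileged_without_host_devices = true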
	I0729 17:51:54.855403   48198 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0729 17:51:54.855413   48198 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0729 17:51:54.855421   48198 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0729 17:51:54.855430   48198 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0729 17:51:54.855440   48198 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0729 17:51:54.855447   48198 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0729 17:51:54.855455   48198 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0729 17:51:54.855465   48198 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0729 17:51:54.855470   48198 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0729 17:51:54.855477   48198 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0729 17:51:54.855480   48198 command_runner.go:130] > # Example:
	I0729 17:51:54.855484   48198 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0729 17:51:54.855488   48198 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0729 17:51:54.855493   48198 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0729 17:51:54.855497   48198 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0729 17:51:54.855500   48198 command_runner.go:130] > # cpuset = 0
	I0729 17:51:54.855504   48198 command_runner.go:130] > # cpushares = "0-1"
	I0729 17:51:54.855507   48198 command_runner.go:130] > # Where:
	I0729 17:51:54.855511   48198 command_runner.go:130] > # The workload name is workload-type.
	I0729 17:51:54.855517   48198 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0729 17:51:54.855522   48198 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0729 17:51:54.855527   48198 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0729 17:51:54.855533   48198 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0729 17:51:54.855543   48198 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0729 17:51:54.855548   48198 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0729 17:51:54.855554   48198 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0729 17:51:54.855563   48198 command_runner.go:130] > # Default value is set to true
	I0729 17:51:54.855570   48198 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0729 17:51:54.855575   48198 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0729 17:51:54.855581   48198 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0729 17:51:54.855586   48198 command_runner.go:130] > # Default value is set to 'false'
	I0729 17:51:54.855591   48198 command_runner.go:130] > # disable_hostport_mapping = false
	I0729 17:51:54.855597   48198 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0729 17:51:54.855602   48198 command_runner.go:130] > #
	I0729 17:51:54.855607   48198 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0729 17:51:54.855614   48198 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0729 17:51:54.855620   48198 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0729 17:51:54.855628   48198 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0729 17:51:54.855635   48198 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0729 17:51:54.855639   48198 command_runner.go:130] > [crio.image]
	I0729 17:51:54.855644   48198 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0729 17:51:54.855651   48198 command_runner.go:130] > # default_transport = "docker://"
	I0729 17:51:54.855656   48198 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0729 17:51:54.855665   48198 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0729 17:51:54.855671   48198 command_runner.go:130] > # global_auth_file = ""
	I0729 17:51:54.855676   48198 command_runner.go:130] > # The image used to instantiate infra containers.
	I0729 17:51:54.855682   48198 command_runner.go:130] > # This option supports live configuration reload.
	I0729 17:51:54.855687   48198 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0729 17:51:54.855695   48198 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0729 17:51:54.855703   48198 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0729 17:51:54.855708   48198 command_runner.go:130] > # This option supports live configuration reload.
	I0729 17:51:54.855714   48198 command_runner.go:130] > # pause_image_auth_file = ""
	I0729 17:51:54.855719   48198 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0729 17:51:54.855726   48198 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0729 17:51:54.855734   48198 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0729 17:51:54.855741   48198 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0729 17:51:54.855745   48198 command_runner.go:130] > # pause_command = "/pause"
	I0729 17:51:54.855753   48198 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0729 17:51:54.855760   48198 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0729 17:51:54.855766   48198 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0729 17:51:54.855782   48198 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0729 17:51:54.855793   48198 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0729 17:51:54.855810   48198 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0729 17:51:54.855819   48198 command_runner.go:130] > # pinned_images = [
	I0729 17:51:54.855824   48198 command_runner.go:130] > # ]
	I0729 17:51:54.855835   48198 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0729 17:51:54.855848   48198 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0729 17:51:54.855861   48198 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0729 17:51:54.855872   48198 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0729 17:51:54.855880   48198 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0729 17:51:54.855883   48198 command_runner.go:130] > # signature_policy = ""
	I0729 17:51:54.855891   48198 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0729 17:51:54.855897   48198 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0729 17:51:54.855905   48198 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0729 17:51:54.855913   48198 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0729 17:51:54.855919   48198 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0729 17:51:54.855926   48198 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0729 17:51:54.855931   48198 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0729 17:51:54.855942   48198 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0729 17:51:54.855948   48198 command_runner.go:130] > # changing them here.
	I0729 17:51:54.855952   48198 command_runner.go:130] > # insecure_registries = [
	I0729 17:51:54.855958   48198 command_runner.go:130] > # ]
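As the comments above recommend, insecure registries are better declared system-wide in /etc/containers/registries.conf; a minimal sketch of such an entry (the registry hostname is an assumption) might be:
	# /etc/containers/registries.conf (containers-registries.conf(5), v2 format)
	[[registry]]
	prefix = "registry.example.internal"
	location = "registry.example.internal"
	insecure = true   # equivalent effect to listing it in insecure_registries above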
	I0729 17:51:54.855963   48198 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0729 17:51:54.855970   48198 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0729 17:51:54.855974   48198 command_runner.go:130] > # image_volumes = "mkdir"
	I0729 17:51:54.855980   48198 command_runner.go:130] > # Temporary directory to use for storing big files
	I0729 17:51:54.855985   48198 command_runner.go:130] > # big_files_temporary_dir = ""
	I0729 17:51:54.855991   48198 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0729 17:51:54.855997   48198 command_runner.go:130] > # CNI plugins.
	I0729 17:51:54.856001   48198 command_runner.go:130] > [crio.network]
	I0729 17:51:54.856008   48198 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0729 17:51:54.856013   48198 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0729 17:51:54.856019   48198 command_runner.go:130] > # cni_default_network = ""
	I0729 17:51:54.856025   48198 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0729 17:51:54.856031   48198 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0729 17:51:54.856037   48198 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0729 17:51:54.856043   48198 command_runner.go:130] > # plugin_dirs = [
	I0729 17:51:54.856046   48198 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0729 17:51:54.856057   48198 command_runner.go:130] > # ]
	I0729 17:51:54.856065   48198 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0729 17:51:54.856071   48198 command_runner.go:130] > [crio.metrics]
	I0729 17:51:54.856075   48198 command_runner.go:130] > # Globally enable or disable metrics support.
	I0729 17:51:54.856081   48198 command_runner.go:130] > enable_metrics = true
	I0729 17:51:54.856086   48198 command_runner.go:130] > # Specify enabled metrics collectors.
	I0729 17:51:54.856092   48198 command_runner.go:130] > # Per default all metrics are enabled.
	I0729 17:51:54.856102   48198 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0729 17:51:54.856110   48198 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0729 17:51:54.856118   48198 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0729 17:51:54.856124   48198 command_runner.go:130] > # metrics_collectors = [
	I0729 17:51:54.856128   48198 command_runner.go:130] > # 	"operations",
	I0729 17:51:54.856134   48198 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0729 17:51:54.856139   48198 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0729 17:51:54.856145   48198 command_runner.go:130] > # 	"operations_errors",
	I0729 17:51:54.856149   48198 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0729 17:51:54.856155   48198 command_runner.go:130] > # 	"image_pulls_by_name",
	I0729 17:51:54.856159   48198 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0729 17:51:54.856166   48198 command_runner.go:130] > # 	"image_pulls_failures",
	I0729 17:51:54.856169   48198 command_runner.go:130] > # 	"image_pulls_successes",
	I0729 17:51:54.856173   48198 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0729 17:51:54.856179   48198 command_runner.go:130] > # 	"image_layer_reuse",
	I0729 17:51:54.856184   48198 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0729 17:51:54.856190   48198 command_runner.go:130] > # 	"containers_oom_total",
	I0729 17:51:54.856194   48198 command_runner.go:130] > # 	"containers_oom",
	I0729 17:51:54.856200   48198 command_runner.go:130] > # 	"processes_defunct",
	I0729 17:51:54.856204   48198 command_runner.go:130] > # 	"operations_total",
	I0729 17:51:54.856211   48198 command_runner.go:130] > # 	"operations_latency_seconds",
	I0729 17:51:54.856215   48198 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0729 17:51:54.856221   48198 command_runner.go:130] > # 	"operations_errors_total",
	I0729 17:51:54.856225   48198 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0729 17:51:54.856232   48198 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0729 17:51:54.856236   48198 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0729 17:51:54.856242   48198 command_runner.go:130] > # 	"image_pulls_success_total",
	I0729 17:51:54.856246   48198 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0729 17:51:54.856252   48198 command_runner.go:130] > # 	"containers_oom_count_total",
	I0729 17:51:54.856261   48198 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0729 17:51:54.856267   48198 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0729 17:51:54.856270   48198 command_runner.go:130] > # ]
	I0729 17:51:54.856277   48198 command_runner.go:130] > # The port on which the metrics server will listen.
	I0729 17:51:54.856281   48198 command_runner.go:130] > # metrics_port = 9090
	I0729 17:51:54.856288   48198 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0729 17:51:54.856292   48198 command_runner.go:130] > # metrics_socket = ""
	I0729 17:51:54.856299   48198 command_runner.go:130] > # The certificate for the secure metrics server.
	I0729 17:51:54.856307   48198 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0729 17:51:54.856314   48198 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0729 17:51:54.856324   48198 command_runner.go:130] > # certificate on any modification event.
	I0729 17:51:54.856330   48198 command_runner.go:130] > # metrics_cert = ""
	I0729 17:51:54.856334   48198 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0729 17:51:54.856341   48198 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0729 17:51:54.856345   48198 command_runner.go:130] > # metrics_key = ""
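Building on the collector-prefix note above, a sketch that keeps metrics enabled but restricts collection to a few collectors (the selection is illustrative only) could be:
	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	# "operations" matches "crio_operations" and "container_runtime_crio_operations".
	metrics_collectors = ["operations", "image_pulls_failure_total", "containers_oom_count_total"]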
	I0729 17:51:54.856350   48198 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0729 17:51:54.856356   48198 command_runner.go:130] > [crio.tracing]
	I0729 17:51:54.856361   48198 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0729 17:51:54.856367   48198 command_runner.go:130] > # enable_tracing = false
	I0729 17:51:54.856372   48198 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0729 17:51:54.856379   48198 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0729 17:51:54.856385   48198 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0729 17:51:54.856392   48198 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0729 17:51:54.856396   48198 command_runner.go:130] > # CRI-O NRI configuration.
	I0729 17:51:54.856399   48198 command_runner.go:130] > [crio.nri]
	I0729 17:51:54.856406   48198 command_runner.go:130] > # Globally enable or disable NRI.
	I0729 17:51:54.856410   48198 command_runner.go:130] > # enable_nri = false
	I0729 17:51:54.856416   48198 command_runner.go:130] > # NRI socket to listen on.
	I0729 17:51:54.856421   48198 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0729 17:51:54.856427   48198 command_runner.go:130] > # NRI plugin directory to use.
	I0729 17:51:54.856431   48198 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0729 17:51:54.856437   48198 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0729 17:51:54.856442   48198 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0729 17:51:54.856449   48198 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0729 17:51:54.856453   48198 command_runner.go:130] > # nri_disable_connections = false
	I0729 17:51:54.856460   48198 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0729 17:51:54.856474   48198 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0729 17:51:54.856481   48198 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0729 17:51:54.856485   48198 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0729 17:51:54.856493   48198 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0729 17:51:54.856498   48198 command_runner.go:130] > [crio.stats]
	I0729 17:51:54.856504   48198 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0729 17:51:54.856511   48198 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0729 17:51:54.856515   48198 command_runner.go:130] > # stats_collection_period = 0
	I0729 17:51:54.856667   48198 cni.go:84] Creating CNI manager for ""
	I0729 17:51:54.856682   48198 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 17:51:54.856693   48198 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 17:51:54.856713   48198 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.218 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-602258 NodeName:multinode-602258 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 17:51:54.856878   48198 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-602258"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 17:51:54.856953   48198 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 17:51:54.866771   48198 command_runner.go:130] > kubeadm
	I0729 17:51:54.866792   48198 command_runner.go:130] > kubectl
	I0729 17:51:54.866799   48198 command_runner.go:130] > kubelet
	I0729 17:51:54.866856   48198 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 17:51:54.866919   48198 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 17:51:54.876681   48198 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0729 17:51:54.894889   48198 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 17:51:54.912236   48198 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0729 17:51:54.929376   48198 ssh_runner.go:195] Run: grep 192.168.39.218	control-plane.minikube.internal$ /etc/hosts
	I0729 17:51:54.933233   48198 command_runner.go:130] > 192.168.39.218	control-plane.minikube.internal
	I0729 17:51:54.933307   48198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:51:55.072715   48198 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:51:55.087326   48198 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/multinode-602258 for IP: 192.168.39.218
	I0729 17:51:55.087349   48198 certs.go:194] generating shared ca certs ...
	I0729 17:51:55.087364   48198 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:51:55.087565   48198 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 17:51:55.087619   48198 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 17:51:55.087636   48198 certs.go:256] generating profile certs ...
	I0729 17:51:55.087784   48198 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/multinode-602258/client.key
	I0729 17:51:55.087868   48198 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/multinode-602258/apiserver.key.b59fdcf4
	I0729 17:51:55.087937   48198 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/multinode-602258/proxy-client.key
	I0729 17:51:55.087950   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 17:51:55.087972   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 17:51:55.087990   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 17:51:55.088007   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 17:51:55.088023   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/multinode-602258/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 17:51:55.088042   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/multinode-602258/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 17:51:55.088060   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/multinode-602258/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 17:51:55.088078   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/multinode-602258/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 17:51:55.088145   48198 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 17:51:55.088186   48198 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 17:51:55.088199   48198 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 17:51:55.088230   48198 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 17:51:55.088263   48198 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 17:51:55.088295   48198 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 17:51:55.088346   48198 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 17:51:55.088383   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> /usr/share/ca-certificates/183932.pem
	I0729 17:51:55.088404   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:51:55.088422   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem -> /usr/share/ca-certificates/18393.pem
	I0729 17:51:55.089082   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 17:51:55.114185   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 17:51:55.137738   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 17:51:55.161184   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 17:51:55.185348   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/multinode-602258/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 17:51:55.209100   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/multinode-602258/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 17:51:55.232524   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/multinode-602258/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 17:51:55.255926   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/multinode-602258/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 17:51:55.279764   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 17:51:55.303484   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 17:51:55.328159   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 17:51:55.351624   48198 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 17:51:55.367514   48198 ssh_runner.go:195] Run: openssl version
	I0729 17:51:55.373185   48198 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0729 17:51:55.373252   48198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 17:51:55.383877   48198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 17:51:55.388302   48198 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 17:51:55.388329   48198 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 17:51:55.388362   48198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 17:51:55.393815   48198 command_runner.go:130] > 3ec20f2e
	I0729 17:51:55.393885   48198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 17:51:55.402943   48198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 17:51:55.413336   48198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:51:55.418157   48198 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:51:55.418409   48198 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:51:55.418484   48198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:51:55.423838   48198 command_runner.go:130] > b5213941
	I0729 17:51:55.424086   48198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 17:51:55.433374   48198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 17:51:55.443924   48198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 17:51:55.448188   48198 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 17:51:55.448325   48198 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 17:51:55.448375   48198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 17:51:55.453780   48198 command_runner.go:130] > 51391683
	I0729 17:51:55.453844   48198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 17:51:55.462747   48198 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 17:51:55.466995   48198 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 17:51:55.467015   48198 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0729 17:51:55.467021   48198 command_runner.go:130] > Device: 253,1	Inode: 4197931     Links: 1
	I0729 17:51:55.467027   48198 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 17:51:55.467034   48198 command_runner.go:130] > Access: 2024-07-29 17:45:07.159583524 +0000
	I0729 17:51:55.467038   48198 command_runner.go:130] > Modify: 2024-07-29 17:45:07.159583524 +0000
	I0729 17:51:55.467043   48198 command_runner.go:130] > Change: 2024-07-29 17:45:07.159583524 +0000
	I0729 17:51:55.467048   48198 command_runner.go:130] >  Birth: 2024-07-29 17:45:07.159583524 +0000
	I0729 17:51:55.467245   48198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 17:51:55.472705   48198 command_runner.go:130] > Certificate will not expire
	I0729 17:51:55.472769   48198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 17:51:55.478549   48198 command_runner.go:130] > Certificate will not expire
	I0729 17:51:55.478603   48198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 17:51:55.484042   48198 command_runner.go:130] > Certificate will not expire
	I0729 17:51:55.484234   48198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 17:51:55.489741   48198 command_runner.go:130] > Certificate will not expire
	I0729 17:51:55.489804   48198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 17:51:55.494967   48198 command_runner.go:130] > Certificate will not expire
	I0729 17:51:55.495242   48198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 17:51:55.500489   48198 command_runner.go:130] > Certificate will not expire
	I0729 17:51:55.500729   48198 kubeadm.go:392] StartCluster: {Name:multinode-602258 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-602258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.218 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.107 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
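	The StartCluster dump above records the full cluster configuration being restarted: Kubernetes v1.30.3 on CRI-O, with one control-plane node (192.168.39.218) and two workers, m02 (192.168.39.107) and m03 (192.168.39.21). The following is a simplified, hypothetical mirror of a few of those fields (not minikube's actual config types), only to make the logged topology easier to read; the values are copied from the dump.

	// Hypothetical, simplified mirror of a few fields from the StartCluster
	// dump above (NOT minikube's actual config structs).
	package main

	import "fmt"

	type Node struct {
		Name         string
		IP           string
		Port         int
		ControlPlane bool
		Worker       bool
	}

	type ClusterConfig struct {
		Name              string
		KubernetesVersion string
		ContainerRuntime  string
		Nodes             []Node
	}

	func main() {
		cfg := ClusterConfig{
			Name:              "multinode-602258",
			KubernetesVersion: "v1.30.3",
			ContainerRuntime:  "crio",
			Nodes: []Node{
				{Name: "", IP: "192.168.39.218", Port: 8443, ControlPlane: true, Worker: true},
				{Name: "m02", IP: "192.168.39.107", Port: 8443, ControlPlane: false, Worker: true},
				{Name: "m03", IP: "192.168.39.21", Port: 0, ControlPlane: false, Worker: true},
			},
		}
		fmt.Printf("%s: %s on %s, %d nodes\n",
			cfg.Name, cfg.KubernetesVersion, cfg.ContainerRuntime, len(cfg.Nodes))
	}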
	I0729 17:51:55.500869   48198 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 17:51:55.500921   48198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 17:51:55.540098   48198 command_runner.go:130] > 2322d2050e81876dd49cf2144c141ed8528c2c549e7dd61e0c48e22f29649330
	I0729 17:51:55.540128   48198 command_runner.go:130] > 7416acdd88a7db531e03ee5767e76470f46b78db5c45ef842821db5036503517
	I0729 17:51:55.540134   48198 command_runner.go:130] > d7615041ffc1aef8098e347b0f7f11240291b7814bf03e59b1e336bc7d7bb7c0
	I0729 17:51:55.540140   48198 command_runner.go:130] > 864297549b1272800bfebdd28175a349b3ee8ef7c7bbd78c771eaad9e02b25cc
	I0729 17:51:55.540149   48198 command_runner.go:130] > e82eb1db29cc5f5d61344dd7ed6985093a7f202f4cdac7abf12ba859cab24ac6
	I0729 17:51:55.540157   48198 command_runner.go:130] > 07fee3a17c566e898bf4bda366cd3fef0865591a42bdfbf1d81036e598ac14ce
	I0729 17:51:55.540167   48198 command_runner.go:130] > 6e7844975c2969533b47b744772eb44171fe78572163dc172999a28d44fcf4ee
	I0729 17:51:55.540177   48198 command_runner.go:130] > 1f624d4b42189dfe667a9aad521e37765c1b61fa5a1300f05f7a937db2c6a6fa
	I0729 17:51:55.541408   48198 cri.go:89] found id: "2322d2050e81876dd49cf2144c141ed8528c2c549e7dd61e0c48e22f29649330"
	I0729 17:51:55.541427   48198 cri.go:89] found id: "7416acdd88a7db531e03ee5767e76470f46b78db5c45ef842821db5036503517"
	I0729 17:51:55.541434   48198 cri.go:89] found id: "d7615041ffc1aef8098e347b0f7f11240291b7814bf03e59b1e336bc7d7bb7c0"
	I0729 17:51:55.541439   48198 cri.go:89] found id: "864297549b1272800bfebdd28175a349b3ee8ef7c7bbd78c771eaad9e02b25cc"
	I0729 17:51:55.541442   48198 cri.go:89] found id: "e82eb1db29cc5f5d61344dd7ed6985093a7f202f4cdac7abf12ba859cab24ac6"
	I0729 17:51:55.541447   48198 cri.go:89] found id: "07fee3a17c566e898bf4bda366cd3fef0865591a42bdfbf1d81036e598ac14ce"
	I0729 17:51:55.541451   48198 cri.go:89] found id: "6e7844975c2969533b47b744772eb44171fe78572163dc172999a28d44fcf4ee"
	I0729 17:51:55.541455   48198 cri.go:89] found id: "1f624d4b42189dfe667a9aad521e37765c1b61fa5a1300f05f7a937db2c6a6fa"
	I0729 17:51:55.541459   48198 cri.go:89] found id: ""
	I0729 17:51:55.541506   48198 ssh_runner.go:195] Run: sudo runc list -f json
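	As part of StartCluster, minikube enumerates the existing kube-system containers: it runs crictl ps -a --quiet with a pod-namespace label filter (yielding the eight IDs listed above) and then `sudo runc list -f json` for the runtime's own view. A rough Go equivalent is sketched below, assuming crictl is installed and passwordless sudo is available; it is simplified from the `sudo -s eval "..."` form shown in the log.

	// Rough equivalent (assumptions: crictl on PATH, passwordless sudo) of the
	// container enumeration logged above; the log then follows up with
	// `sudo runc list -f json` for the low-level runtime state.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id) // same shape as the cri.go lines above
		}
	}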
	
	
	==> CRI-O <==
	Jul 29 17:53:39 multinode-602258 crio[2883]: time="2024-07-29 17:53:39.924672159Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722275619924650533,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=40152153-554a-4038-8c9c-987c14af62f1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:53:39 multinode-602258 crio[2883]: time="2024-07-29 17:53:39.925422633Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=146b366d-7fb0-4d11-ae4f-d13126a86b69 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:53:39 multinode-602258 crio[2883]: time="2024-07-29 17:53:39.925494596Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=146b366d-7fb0-4d11-ae4f-d13126a86b69 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:53:39 multinode-602258 crio[2883]: time="2024-07-29 17:53:39.925845781Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94d253c5fc168cec3e4a288f1ff892488ebb63bd43a64cb2b364daeef4a42092,PodSandboxId:ecba257d3f2a1cd221ec4dd1fb5570367cb55f9177de6c5bdf28b9fb345816e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722275556517307823,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kqrzf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1c31cd36-a917-4a07-a18f-887c7defa6e2,},Annotations:map[string]string{io.kubernetes.container.hash: 1799863e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3426a69c32cd243e491fea11995a1bf631263c68a605d31e6a21b97ce4d0ac4b,PodSandboxId:3cda1a27a1fe30cd41fb7cc9a711e9d0de01ff7422bf7c70ce897f67901c7a7b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722275523193375217,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-68dnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 700c5f4f-8bac-4a69-8174-0b8a80c4e831,},Annotations:map[string]string{io.kubernetes.container.hash: ef2bce6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a7ed335c280800a15790e3d10b98a97080fe5a90197bbc732c626cfdd89f67a,PodSandboxId:b44765ca91bab1000cb666b4a381cef645dd4299547f17c543e4595f9e2277a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722275523118117406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b7fmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbbeed00-0740-41dc-b9f2-aa03336074ac,},Annotations:map[string]string{io.kubernetes.container.hash: be4c111a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf1a3774da5a1b300885fa869b6ee486244da9331c7388294b88d4cc568c1065,PodSandboxId:cb284998c1c34179968f91ccb58ab4392545eca3a67410e7f46cb24d6acdd73d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722275522962364647,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shhsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8951fee7-e31c-401a-8688-79487ea5fc64,},Annotations:map[string]
string{io.kubernetes.container.hash: 5dbfc197,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171b21ffde4794280075b3f5b4b787a263f302a1fb712bb15b69cba1cefe437d,PodSandboxId:73fc019512997562eddb104934baa7fb9afdb42ec0424043d15835a1801a723d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722275522884930368,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dee56b25-3f87-483c-8fda-95989162e3ba,},Annotations:map[string]string{io.ku
bernetes.container.hash: cdf99c0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b655e5789fb522e6dae942b61ff483971a837d40607e3530bc9c1ae524e627e1,PodSandboxId:356e8fa20bee0469a0aa2c1a1c427a032fea595ab55791571b243d5dd1895e79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722275518023760140,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91128e09c1b43339a5edc267d8a2607c,},Annotations:map[string]string{io.kubernetes.container.hash: 93b97758,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619d14875058f9bdafffdb8f819f0dbada1c276a2b0c1a22286f2a986be363bf,PodSandboxId:8716f9d02e68514d22182809651ba202d94a52d2e187db3894ee2c47c4a3282c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722275517994184016,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74794ee3688afe14ba4fbb763c9f1f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd3c8ef53e88607de4cee229954a3b64b2c31ca195b03e2e83e6b390b674f06a,PodSandboxId:e574e4d95b35c760042756833f92daa0a5169e957cf29184fedb43ab6fedaa71,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722275517930270247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfb26bf40408c8779988df3a1b3dbe66,},Annotations:map[string]string{io.kubernetes.container.hash: 76ddb303,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f75287ad92d6d5f2e5b7da85de7858362a02e33879b2c77184aafed885e2e0d,PodSandboxId:72e988dc049fa83ebf7b5971055dadbc6fe84fe29f0c3ee4d33c920aa33d0ef4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722275517908930153,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 987b84de9f64c76a1f7b604c11dc5ffd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f87e33e730abde22af3b74af0ab2bd17918944204a4b158207fa583b003b10b9,PodSandboxId:37d9e64592cd7aaa1505dcddccbc5dd067152f471067fd6873fddcbaf957738a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722275199589060401,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kqrzf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1c31cd36-a917-4a07-a18f-887c7defa6e2,},Annotations:map[string]string{io.kubernetes.container.hash: 1799863e,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2322d2050e81876dd49cf2144c141ed8528c2c549e7dd61e0c48e22f29649330,PodSandboxId:ee8980b3e7f2b194bf74c3732dd35f28032437e5a232010aa7d6c37542186709,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722275144901037514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b7fmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbbeed00-0740-41dc-b9f2-aa03336074ac,},Annotations:map[string]string{io.kubernetes.container.hash: be4c111a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7416acdd88a7db531e03ee5767e76470f46b78db5c45ef842821db5036503517,PodSandboxId:444cd6c8b4e011244c073f7937a14b413182cb3cea8fc4ffafab8ea2fe27b8d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722275144858336451,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: dee56b25-3f87-483c-8fda-95989162e3ba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf99c0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7615041ffc1aef8098e347b0f7f11240291b7814bf03e59b1e336bc7d7bb7c0,PodSandboxId:fd87ad0c3835a9427b7071df83e4698fd7627224603c974a5578338a3878b88c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722275133246875325,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-68dnv,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 700c5f4f-8bac-4a69-8174-0b8a80c4e831,},Annotations:map[string]string{io.kubernetes.container.hash: ef2bce6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:864297549b1272800bfebdd28175a349b3ee8ef7c7bbd78c771eaad9e02b25cc,PodSandboxId:df1d9d917d4d21947539533f40d12eb1b40d87fe3253db110c799825ab064153,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722275131100162272,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shhsx,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 8951fee7-e31c-401a-8688-79487ea5fc64,},Annotations:map[string]string{io.kubernetes.container.hash: 5dbfc197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e82eb1db29cc5f5d61344dd7ed6985093a7f202f4cdac7abf12ba859cab24ac6,PodSandboxId:fd621ebddac51317684d0de146954e509296749a3949dffd5d6da406fa9f7efd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722275111299056055,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
74794ee3688afe14ba4fbb763c9f1f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e7844975c2969533b47b744772eb44171fe78572163dc172999a28d44fcf4ee,PodSandboxId:b63a4e8c712bf07ba90c52a91a51b4a32c0505aab36ca15f5a09f7d3a15117b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722275111235776829,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91128e09c1b43339a5edc267d8a2607c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 93b97758,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07fee3a17c566e898bf4bda366cd3fef0865591a42bdfbf1d81036e598ac14ce,PodSandboxId:d573150cedc5641a9fb0a3a4cfb625233a2f4626ead42fee1318995d61551222,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722275111245721916,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfb26bf40408c8779988df3a1b3dbe66,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 76ddb303,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f624d4b42189dfe667a9aad521e37765c1b61fa5a1300f05f7a937db2c6a6fa,PodSandboxId:da889349b706f3293f3f67b781023c77afe739743aceecb64997b2e77d5d49a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722275111216522794,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 987b84de9f64c76a1f7b604c11dc5ffd,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=146b366d-7fb0-4d11-ae4f-d13126a86b69 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:53:39 multinode-602258 crio[2883]: time="2024-07-29 17:53:39.967007570Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=51895f26-75f9-461e-886b-53391b24356f name=/runtime.v1.RuntimeService/Version
	Jul 29 17:53:39 multinode-602258 crio[2883]: time="2024-07-29 17:53:39.967077268Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=51895f26-75f9-461e-886b-53391b24356f name=/runtime.v1.RuntimeService/Version
	Jul 29 17:53:39 multinode-602258 crio[2883]: time="2024-07-29 17:53:39.967959860Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50671dec-97c4-428f-9ab3-2916018c0c65 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:53:39 multinode-602258 crio[2883]: time="2024-07-29 17:53:39.968522834Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722275619968500797,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50671dec-97c4-428f-9ab3-2916018c0c65 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:53:39 multinode-602258 crio[2883]: time="2024-07-29 17:53:39.968912215Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea41b256-594d-47ce-a059-f844e69a675f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:53:39 multinode-602258 crio[2883]: time="2024-07-29 17:53:39.968984392Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea41b256-594d-47ce-a059-f844e69a675f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:53:39 multinode-602258 crio[2883]: time="2024-07-29 17:53:39.969368355Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94d253c5fc168cec3e4a288f1ff892488ebb63bd43a64cb2b364daeef4a42092,PodSandboxId:ecba257d3f2a1cd221ec4dd1fb5570367cb55f9177de6c5bdf28b9fb345816e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722275556517307823,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kqrzf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1c31cd36-a917-4a07-a18f-887c7defa6e2,},Annotations:map[string]string{io.kubernetes.container.hash: 1799863e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3426a69c32cd243e491fea11995a1bf631263c68a605d31e6a21b97ce4d0ac4b,PodSandboxId:3cda1a27a1fe30cd41fb7cc9a711e9d0de01ff7422bf7c70ce897f67901c7a7b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722275523193375217,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-68dnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 700c5f4f-8bac-4a69-8174-0b8a80c4e831,},Annotations:map[string]string{io.kubernetes.container.hash: ef2bce6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a7ed335c280800a15790e3d10b98a97080fe5a90197bbc732c626cfdd89f67a,PodSandboxId:b44765ca91bab1000cb666b4a381cef645dd4299547f17c543e4595f9e2277a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722275523118117406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b7fmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbbeed00-0740-41dc-b9f2-aa03336074ac,},Annotations:map[string]string{io.kubernetes.container.hash: be4c111a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf1a3774da5a1b300885fa869b6ee486244da9331c7388294b88d4cc568c1065,PodSandboxId:cb284998c1c34179968f91ccb58ab4392545eca3a67410e7f46cb24d6acdd73d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722275522962364647,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shhsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8951fee7-e31c-401a-8688-79487ea5fc64,},Annotations:map[string]
string{io.kubernetes.container.hash: 5dbfc197,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171b21ffde4794280075b3f5b4b787a263f302a1fb712bb15b69cba1cefe437d,PodSandboxId:73fc019512997562eddb104934baa7fb9afdb42ec0424043d15835a1801a723d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722275522884930368,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dee56b25-3f87-483c-8fda-95989162e3ba,},Annotations:map[string]string{io.ku
bernetes.container.hash: cdf99c0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b655e5789fb522e6dae942b61ff483971a837d40607e3530bc9c1ae524e627e1,PodSandboxId:356e8fa20bee0469a0aa2c1a1c427a032fea595ab55791571b243d5dd1895e79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722275518023760140,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91128e09c1b43339a5edc267d8a2607c,},Annotations:map[string]string{io.kubernetes.container.hash: 93b97758,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619d14875058f9bdafffdb8f819f0dbada1c276a2b0c1a22286f2a986be363bf,PodSandboxId:8716f9d02e68514d22182809651ba202d94a52d2e187db3894ee2c47c4a3282c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722275517994184016,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74794ee3688afe14ba4fbb763c9f1f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd3c8ef53e88607de4cee229954a3b64b2c31ca195b03e2e83e6b390b674f06a,PodSandboxId:e574e4d95b35c760042756833f92daa0a5169e957cf29184fedb43ab6fedaa71,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722275517930270247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfb26bf40408c8779988df3a1b3dbe66,},Annotations:map[string]string{io.kubernetes.container.hash: 76ddb303,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f75287ad92d6d5f2e5b7da85de7858362a02e33879b2c77184aafed885e2e0d,PodSandboxId:72e988dc049fa83ebf7b5971055dadbc6fe84fe29f0c3ee4d33c920aa33d0ef4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722275517908930153,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 987b84de9f64c76a1f7b604c11dc5ffd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f87e33e730abde22af3b74af0ab2bd17918944204a4b158207fa583b003b10b9,PodSandboxId:37d9e64592cd7aaa1505dcddccbc5dd067152f471067fd6873fddcbaf957738a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722275199589060401,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kqrzf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1c31cd36-a917-4a07-a18f-887c7defa6e2,},Annotations:map[string]string{io.kubernetes.container.hash: 1799863e,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2322d2050e81876dd49cf2144c141ed8528c2c549e7dd61e0c48e22f29649330,PodSandboxId:ee8980b3e7f2b194bf74c3732dd35f28032437e5a232010aa7d6c37542186709,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722275144901037514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b7fmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbbeed00-0740-41dc-b9f2-aa03336074ac,},Annotations:map[string]string{io.kubernetes.container.hash: be4c111a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7416acdd88a7db531e03ee5767e76470f46b78db5c45ef842821db5036503517,PodSandboxId:444cd6c8b4e011244c073f7937a14b413182cb3cea8fc4ffafab8ea2fe27b8d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722275144858336451,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: dee56b25-3f87-483c-8fda-95989162e3ba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf99c0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7615041ffc1aef8098e347b0f7f11240291b7814bf03e59b1e336bc7d7bb7c0,PodSandboxId:fd87ad0c3835a9427b7071df83e4698fd7627224603c974a5578338a3878b88c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722275133246875325,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-68dnv,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 700c5f4f-8bac-4a69-8174-0b8a80c4e831,},Annotations:map[string]string{io.kubernetes.container.hash: ef2bce6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:864297549b1272800bfebdd28175a349b3ee8ef7c7bbd78c771eaad9e02b25cc,PodSandboxId:df1d9d917d4d21947539533f40d12eb1b40d87fe3253db110c799825ab064153,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722275131100162272,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shhsx,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 8951fee7-e31c-401a-8688-79487ea5fc64,},Annotations:map[string]string{io.kubernetes.container.hash: 5dbfc197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e82eb1db29cc5f5d61344dd7ed6985093a7f202f4cdac7abf12ba859cab24ac6,PodSandboxId:fd621ebddac51317684d0de146954e509296749a3949dffd5d6da406fa9f7efd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722275111299056055,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
74794ee3688afe14ba4fbb763c9f1f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e7844975c2969533b47b744772eb44171fe78572163dc172999a28d44fcf4ee,PodSandboxId:b63a4e8c712bf07ba90c52a91a51b4a32c0505aab36ca15f5a09f7d3a15117b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722275111235776829,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91128e09c1b43339a5edc267d8a2607c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 93b97758,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07fee3a17c566e898bf4bda366cd3fef0865591a42bdfbf1d81036e598ac14ce,PodSandboxId:d573150cedc5641a9fb0a3a4cfb625233a2f4626ead42fee1318995d61551222,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722275111245721916,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfb26bf40408c8779988df3a1b3dbe66,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 76ddb303,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f624d4b42189dfe667a9aad521e37765c1b61fa5a1300f05f7a937db2c6a6fa,PodSandboxId:da889349b706f3293f3f67b781023c77afe739743aceecb64997b2e77d5d49a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722275111216522794,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 987b84de9f64c76a1f7b604c11dc5ffd,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea41b256-594d-47ce-a059-f844e69a675f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:53:40 multinode-602258 crio[2883]: time="2024-07-29 17:53:40.010803146Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e86fd73e-ef27-47d4-9d66-09bcb2b9df16 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:53:40 multinode-602258 crio[2883]: time="2024-07-29 17:53:40.010895598Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e86fd73e-ef27-47d4-9d66-09bcb2b9df16 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:53:40 multinode-602258 crio[2883]: time="2024-07-29 17:53:40.011976316Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d5e99723-6067-41f2-91f6-02d7bba9cad2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:53:40 multinode-602258 crio[2883]: time="2024-07-29 17:53:40.012614406Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722275620012587856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d5e99723-6067-41f2-91f6-02d7bba9cad2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:53:40 multinode-602258 crio[2883]: time="2024-07-29 17:53:40.013433869Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=866ccb8d-b590-445b-8091-df86452fda12 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:53:40 multinode-602258 crio[2883]: time="2024-07-29 17:53:40.013549880Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=866ccb8d-b590-445b-8091-df86452fda12 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:53:40 multinode-602258 crio[2883]: time="2024-07-29 17:53:40.013874295Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94d253c5fc168cec3e4a288f1ff892488ebb63bd43a64cb2b364daeef4a42092,PodSandboxId:ecba257d3f2a1cd221ec4dd1fb5570367cb55f9177de6c5bdf28b9fb345816e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722275556517307823,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kqrzf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1c31cd36-a917-4a07-a18f-887c7defa6e2,},Annotations:map[string]string{io.kubernetes.container.hash: 1799863e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3426a69c32cd243e491fea11995a1bf631263c68a605d31e6a21b97ce4d0ac4b,PodSandboxId:3cda1a27a1fe30cd41fb7cc9a711e9d0de01ff7422bf7c70ce897f67901c7a7b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722275523193375217,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-68dnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 700c5f4f-8bac-4a69-8174-0b8a80c4e831,},Annotations:map[string]string{io.kubernetes.container.hash: ef2bce6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a7ed335c280800a15790e3d10b98a97080fe5a90197bbc732c626cfdd89f67a,PodSandboxId:b44765ca91bab1000cb666b4a381cef645dd4299547f17c543e4595f9e2277a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722275523118117406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b7fmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbbeed00-0740-41dc-b9f2-aa03336074ac,},Annotations:map[string]string{io.kubernetes.container.hash: be4c111a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf1a3774da5a1b300885fa869b6ee486244da9331c7388294b88d4cc568c1065,PodSandboxId:cb284998c1c34179968f91ccb58ab4392545eca3a67410e7f46cb24d6acdd73d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722275522962364647,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shhsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8951fee7-e31c-401a-8688-79487ea5fc64,},Annotations:map[string]
string{io.kubernetes.container.hash: 5dbfc197,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171b21ffde4794280075b3f5b4b787a263f302a1fb712bb15b69cba1cefe437d,PodSandboxId:73fc019512997562eddb104934baa7fb9afdb42ec0424043d15835a1801a723d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722275522884930368,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dee56b25-3f87-483c-8fda-95989162e3ba,},Annotations:map[string]string{io.ku
bernetes.container.hash: cdf99c0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b655e5789fb522e6dae942b61ff483971a837d40607e3530bc9c1ae524e627e1,PodSandboxId:356e8fa20bee0469a0aa2c1a1c427a032fea595ab55791571b243d5dd1895e79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722275518023760140,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91128e09c1b43339a5edc267d8a2607c,},Annotations:map[string]string{io.kubernetes.container.hash: 93b97758,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619d14875058f9bdafffdb8f819f0dbada1c276a2b0c1a22286f2a986be363bf,PodSandboxId:8716f9d02e68514d22182809651ba202d94a52d2e187db3894ee2c47c4a3282c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722275517994184016,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74794ee3688afe14ba4fbb763c9f1f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd3c8ef53e88607de4cee229954a3b64b2c31ca195b03e2e83e6b390b674f06a,PodSandboxId:e574e4d95b35c760042756833f92daa0a5169e957cf29184fedb43ab6fedaa71,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722275517930270247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfb26bf40408c8779988df3a1b3dbe66,},Annotations:map[string]string{io.kubernetes.container.hash: 76ddb303,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f75287ad92d6d5f2e5b7da85de7858362a02e33879b2c77184aafed885e2e0d,PodSandboxId:72e988dc049fa83ebf7b5971055dadbc6fe84fe29f0c3ee4d33c920aa33d0ef4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722275517908930153,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 987b84de9f64c76a1f7b604c11dc5ffd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f87e33e730abde22af3b74af0ab2bd17918944204a4b158207fa583b003b10b9,PodSandboxId:37d9e64592cd7aaa1505dcddccbc5dd067152f471067fd6873fddcbaf957738a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722275199589060401,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kqrzf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1c31cd36-a917-4a07-a18f-887c7defa6e2,},Annotations:map[string]string{io.kubernetes.container.hash: 1799863e,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2322d2050e81876dd49cf2144c141ed8528c2c549e7dd61e0c48e22f29649330,PodSandboxId:ee8980b3e7f2b194bf74c3732dd35f28032437e5a232010aa7d6c37542186709,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722275144901037514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b7fmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbbeed00-0740-41dc-b9f2-aa03336074ac,},Annotations:map[string]string{io.kubernetes.container.hash: be4c111a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7416acdd88a7db531e03ee5767e76470f46b78db5c45ef842821db5036503517,PodSandboxId:444cd6c8b4e011244c073f7937a14b413182cb3cea8fc4ffafab8ea2fe27b8d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722275144858336451,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: dee56b25-3f87-483c-8fda-95989162e3ba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf99c0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7615041ffc1aef8098e347b0f7f11240291b7814bf03e59b1e336bc7d7bb7c0,PodSandboxId:fd87ad0c3835a9427b7071df83e4698fd7627224603c974a5578338a3878b88c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722275133246875325,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-68dnv,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 700c5f4f-8bac-4a69-8174-0b8a80c4e831,},Annotations:map[string]string{io.kubernetes.container.hash: ef2bce6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:864297549b1272800bfebdd28175a349b3ee8ef7c7bbd78c771eaad9e02b25cc,PodSandboxId:df1d9d917d4d21947539533f40d12eb1b40d87fe3253db110c799825ab064153,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722275131100162272,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shhsx,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 8951fee7-e31c-401a-8688-79487ea5fc64,},Annotations:map[string]string{io.kubernetes.container.hash: 5dbfc197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e82eb1db29cc5f5d61344dd7ed6985093a7f202f4cdac7abf12ba859cab24ac6,PodSandboxId:fd621ebddac51317684d0de146954e509296749a3949dffd5d6da406fa9f7efd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722275111299056055,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
74794ee3688afe14ba4fbb763c9f1f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e7844975c2969533b47b744772eb44171fe78572163dc172999a28d44fcf4ee,PodSandboxId:b63a4e8c712bf07ba90c52a91a51b4a32c0505aab36ca15f5a09f7d3a15117b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722275111235776829,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91128e09c1b43339a5edc267d8a2607c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 93b97758,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07fee3a17c566e898bf4bda366cd3fef0865591a42bdfbf1d81036e598ac14ce,PodSandboxId:d573150cedc5641a9fb0a3a4cfb625233a2f4626ead42fee1318995d61551222,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722275111245721916,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfb26bf40408c8779988df3a1b3dbe66,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 76ddb303,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f624d4b42189dfe667a9aad521e37765c1b61fa5a1300f05f7a937db2c6a6fa,PodSandboxId:da889349b706f3293f3f67b781023c77afe739743aceecb64997b2e77d5d49a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722275111216522794,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 987b84de9f64c76a1f7b604c11dc5ffd,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=866ccb8d-b590-445b-8091-df86452fda12 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:53:40 multinode-602258 crio[2883]: time="2024-07-29 17:53:40.053646806Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=df0e54ba-589b-4870-b0b8-6bb4fb805335 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:53:40 multinode-602258 crio[2883]: time="2024-07-29 17:53:40.053739121Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=df0e54ba-589b-4870-b0b8-6bb4fb805335 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:53:40 multinode-602258 crio[2883]: time="2024-07-29 17:53:40.054596690Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=47e85a31-cfff-466c-b88b-5a5c10be2c09 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:53:40 multinode-602258 crio[2883]: time="2024-07-29 17:53:40.055036076Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722275620055007126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=47e85a31-cfff-466c-b88b-5a5c10be2c09 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:53:40 multinode-602258 crio[2883]: time="2024-07-29 17:53:40.055521950Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3de47aca-b811-43e0-b9e0-656c6a40a0d1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:53:40 multinode-602258 crio[2883]: time="2024-07-29 17:53:40.055593566Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3de47aca-b811-43e0-b9e0-656c6a40a0d1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:53:40 multinode-602258 crio[2883]: time="2024-07-29 17:53:40.058442748Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94d253c5fc168cec3e4a288f1ff892488ebb63bd43a64cb2b364daeef4a42092,PodSandboxId:ecba257d3f2a1cd221ec4dd1fb5570367cb55f9177de6c5bdf28b9fb345816e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722275556517307823,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kqrzf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1c31cd36-a917-4a07-a18f-887c7defa6e2,},Annotations:map[string]string{io.kubernetes.container.hash: 1799863e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3426a69c32cd243e491fea11995a1bf631263c68a605d31e6a21b97ce4d0ac4b,PodSandboxId:3cda1a27a1fe30cd41fb7cc9a711e9d0de01ff7422bf7c70ce897f67901c7a7b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722275523193375217,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-68dnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 700c5f4f-8bac-4a69-8174-0b8a80c4e831,},Annotations:map[string]string{io.kubernetes.container.hash: ef2bce6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a7ed335c280800a15790e3d10b98a97080fe5a90197bbc732c626cfdd89f67a,PodSandboxId:b44765ca91bab1000cb666b4a381cef645dd4299547f17c543e4595f9e2277a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722275523118117406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b7fmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbbeed00-0740-41dc-b9f2-aa03336074ac,},Annotations:map[string]string{io.kubernetes.container.hash: be4c111a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf1a3774da5a1b300885fa869b6ee486244da9331c7388294b88d4cc568c1065,PodSandboxId:cb284998c1c34179968f91ccb58ab4392545eca3a67410e7f46cb24d6acdd73d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722275522962364647,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shhsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8951fee7-e31c-401a-8688-79487ea5fc64,},Annotations:map[string]
string{io.kubernetes.container.hash: 5dbfc197,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171b21ffde4794280075b3f5b4b787a263f302a1fb712bb15b69cba1cefe437d,PodSandboxId:73fc019512997562eddb104934baa7fb9afdb42ec0424043d15835a1801a723d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722275522884930368,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dee56b25-3f87-483c-8fda-95989162e3ba,},Annotations:map[string]string{io.ku
bernetes.container.hash: cdf99c0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b655e5789fb522e6dae942b61ff483971a837d40607e3530bc9c1ae524e627e1,PodSandboxId:356e8fa20bee0469a0aa2c1a1c427a032fea595ab55791571b243d5dd1895e79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722275518023760140,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91128e09c1b43339a5edc267d8a2607c,},Annotations:map[string]string{io.kubernetes.container.hash: 93b97758,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619d14875058f9bdafffdb8f819f0dbada1c276a2b0c1a22286f2a986be363bf,PodSandboxId:8716f9d02e68514d22182809651ba202d94a52d2e187db3894ee2c47c4a3282c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722275517994184016,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74794ee3688afe14ba4fbb763c9f1f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd3c8ef53e88607de4cee229954a3b64b2c31ca195b03e2e83e6b390b674f06a,PodSandboxId:e574e4d95b35c760042756833f92daa0a5169e957cf29184fedb43ab6fedaa71,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722275517930270247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfb26bf40408c8779988df3a1b3dbe66,},Annotations:map[string]string{io.kubernetes.container.hash: 76ddb303,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f75287ad92d6d5f2e5b7da85de7858362a02e33879b2c77184aafed885e2e0d,PodSandboxId:72e988dc049fa83ebf7b5971055dadbc6fe84fe29f0c3ee4d33c920aa33d0ef4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722275517908930153,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 987b84de9f64c76a1f7b604c11dc5ffd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f87e33e730abde22af3b74af0ab2bd17918944204a4b158207fa583b003b10b9,PodSandboxId:37d9e64592cd7aaa1505dcddccbc5dd067152f471067fd6873fddcbaf957738a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722275199589060401,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kqrzf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1c31cd36-a917-4a07-a18f-887c7defa6e2,},Annotations:map[string]string{io.kubernetes.container.hash: 1799863e,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2322d2050e81876dd49cf2144c141ed8528c2c549e7dd61e0c48e22f29649330,PodSandboxId:ee8980b3e7f2b194bf74c3732dd35f28032437e5a232010aa7d6c37542186709,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722275144901037514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b7fmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbbeed00-0740-41dc-b9f2-aa03336074ac,},Annotations:map[string]string{io.kubernetes.container.hash: be4c111a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7416acdd88a7db531e03ee5767e76470f46b78db5c45ef842821db5036503517,PodSandboxId:444cd6c8b4e011244c073f7937a14b413182cb3cea8fc4ffafab8ea2fe27b8d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722275144858336451,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: dee56b25-3f87-483c-8fda-95989162e3ba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf99c0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7615041ffc1aef8098e347b0f7f11240291b7814bf03e59b1e336bc7d7bb7c0,PodSandboxId:fd87ad0c3835a9427b7071df83e4698fd7627224603c974a5578338a3878b88c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722275133246875325,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-68dnv,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 700c5f4f-8bac-4a69-8174-0b8a80c4e831,},Annotations:map[string]string{io.kubernetes.container.hash: ef2bce6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:864297549b1272800bfebdd28175a349b3ee8ef7c7bbd78c771eaad9e02b25cc,PodSandboxId:df1d9d917d4d21947539533f40d12eb1b40d87fe3253db110c799825ab064153,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722275131100162272,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shhsx,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 8951fee7-e31c-401a-8688-79487ea5fc64,},Annotations:map[string]string{io.kubernetes.container.hash: 5dbfc197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e82eb1db29cc5f5d61344dd7ed6985093a7f202f4cdac7abf12ba859cab24ac6,PodSandboxId:fd621ebddac51317684d0de146954e509296749a3949dffd5d6da406fa9f7efd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722275111299056055,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
74794ee3688afe14ba4fbb763c9f1f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e7844975c2969533b47b744772eb44171fe78572163dc172999a28d44fcf4ee,PodSandboxId:b63a4e8c712bf07ba90c52a91a51b4a32c0505aab36ca15f5a09f7d3a15117b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722275111235776829,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91128e09c1b43339a5edc267d8a2607c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 93b97758,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07fee3a17c566e898bf4bda366cd3fef0865591a42bdfbf1d81036e598ac14ce,PodSandboxId:d573150cedc5641a9fb0a3a4cfb625233a2f4626ead42fee1318995d61551222,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722275111245721916,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfb26bf40408c8779988df3a1b3dbe66,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 76ddb303,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f624d4b42189dfe667a9aad521e37765c1b61fa5a1300f05f7a937db2c6a6fa,PodSandboxId:da889349b706f3293f3f67b781023c77afe739743aceecb64997b2e77d5d49a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722275111216522794,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 987b84de9f64c76a1f7b604c11dc5ffd,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3de47aca-b811-43e0-b9e0-656c6a40a0d1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	94d253c5fc168       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   ecba257d3f2a1       busybox-fc5497c4f-kqrzf
	3426a69c32cd2       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      About a minute ago   Running             kindnet-cni               1                   3cda1a27a1fe3       kindnet-68dnv
	9a7ed335c2808       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   b44765ca91bab       coredns-7db6d8ff4d-b7fmn
	bf1a3774da5a1       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   cb284998c1c34       kube-proxy-shhsx
	171b21ffde479       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   73fc019512997       storage-provisioner
	b655e5789fb52       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   356e8fa20bee0       etcd-multinode-602258
	619d14875058f       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   8716f9d02e685       kube-scheduler-multinode-602258
	fd3c8ef53e886       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   e574e4d95b35c       kube-apiserver-multinode-602258
	7f75287ad92d6       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   72e988dc049fa       kube-controller-manager-multinode-602258
	f87e33e730abd       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   37d9e64592cd7       busybox-fc5497c4f-kqrzf
	2322d2050e818       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   ee8980b3e7f2b       coredns-7db6d8ff4d-b7fmn
	7416acdd88a7d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   444cd6c8b4e01       storage-provisioner
	d7615041ffc1a       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    8 minutes ago        Exited              kindnet-cni               0                   fd87ad0c3835a       kindnet-68dnv
	864297549b127       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   df1d9d917d4d2       kube-proxy-shhsx
	e82eb1db29cc5       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   fd621ebddac51       kube-scheduler-multinode-602258
	07fee3a17c566       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   d573150cedc56       kube-apiserver-multinode-602258
	6e7844975c296       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   b63a4e8c712bf       etcd-multinode-602258
	1f624d4b42189       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   da889349b706f       kube-controller-manager-multinode-602258
	
	
	==> coredns [2322d2050e81876dd49cf2144c141ed8528c2c549e7dd61e0c48e22f29649330] <==
	[INFO] 10.244.1.2:35429 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001692093s
	[INFO] 10.244.1.2:37285 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00009108s
	[INFO] 10.244.1.2:56622 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127112s
	[INFO] 10.244.1.2:45288 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001042633s
	[INFO] 10.244.1.2:52803 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000059381s
	[INFO] 10.244.1.2:54071 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130426s
	[INFO] 10.244.1.2:39702 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00006405s
	[INFO] 10.244.0.3:50417 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077697s
	[INFO] 10.244.0.3:50628 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088326s
	[INFO] 10.244.0.3:58676 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000035996s
	[INFO] 10.244.0.3:52001 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000030324s
	[INFO] 10.244.1.2:56675 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123978s
	[INFO] 10.244.1.2:43659 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119779s
	[INFO] 10.244.1.2:52711 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075917s
	[INFO] 10.244.1.2:45351 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067272s
	[INFO] 10.244.0.3:52683 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076616s
	[INFO] 10.244.0.3:38420 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000082376s
	[INFO] 10.244.0.3:44768 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000066627s
	[INFO] 10.244.0.3:33241 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000051007s
	[INFO] 10.244.1.2:57945 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000257298s
	[INFO] 10.244.1.2:50244 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108704s
	[INFO] 10.244.1.2:44884 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082904s
	[INFO] 10.244.1.2:44311 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000088912s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9a7ed335c280800a15790e3d10b98a97080fe5a90197bbc732c626cfdd89f67a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45425 - 15121 "HINFO IN 1518611092228989175.7819485505034679445. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025883017s
	
	
	==> describe nodes <==
	Name:               multinode-602258
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-602258
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=multinode-602258
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T17_45_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:45:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-602258
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:53:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:52:01 +0000   Mon, 29 Jul 2024 17:45:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:52:01 +0000   Mon, 29 Jul 2024 17:45:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:52:01 +0000   Mon, 29 Jul 2024 17:45:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:52:01 +0000   Mon, 29 Jul 2024 17:45:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.218
	  Hostname:    multinode-602258
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 012498f1d60e4288b1f7a7707dd783e7
	  System UUID:                012498f1-d60e-4288-b1f7-a7707dd783e7
	  Boot ID:                    a03477c3-feed-4e08-9160-365794e87044
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kqrzf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m2s
	  kube-system                 coredns-7db6d8ff4d-b7fmn                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m10s
	  kube-system                 etcd-multinode-602258                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m24s
	  kube-system                 kindnet-68dnv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m10s
	  kube-system                 kube-apiserver-multinode-602258             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 kube-controller-manager-multinode-602258    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 kube-proxy-shhsx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 kube-scheduler-multinode-602258             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m8s                 kube-proxy       
	  Normal  Starting                 96s                  kube-proxy       
	  Normal  NodeHasSufficientPID     8m24s                kubelet          Node multinode-602258 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m24s                kubelet          Node multinode-602258 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m24s                kubelet          Node multinode-602258 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m24s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m11s                node-controller  Node multinode-602258 event: Registered Node multinode-602258 in Controller
	  Normal  NodeReady                7m56s                kubelet          Node multinode-602258 status is now: NodeReady
	  Normal  Starting                 103s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s (x8 over 103s)  kubelet          Node multinode-602258 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s (x8 over 103s)  kubelet          Node multinode-602258 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s (x7 over 103s)  kubelet          Node multinode-602258 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  103s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           86s                  node-controller  Node multinode-602258 event: Registered Node multinode-602258 in Controller
	
	
	Name:               multinode-602258-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-602258-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=multinode-602258
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T17_52_42_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:52:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-602258-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:53:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:53:12 +0000   Mon, 29 Jul 2024 17:52:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:53:12 +0000   Mon, 29 Jul 2024 17:52:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:53:12 +0000   Mon, 29 Jul 2024 17:52:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:53:12 +0000   Mon, 29 Jul 2024 17:53:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.107
	  Hostname:    multinode-602258-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9519401b26cd460a900a42ed1c507ef4
	  System UUID:                9519401b-26cd-460a-900a-42ed1c507ef4
	  Boot ID:                    c6e79f73-c70a-4d9c-b975-69ed661d4cf1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-v7xwc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kindnet-cb54x              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m23s
	  kube-system                 kube-proxy-vknqb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 53s                    kube-proxy  
	  Normal  Starting                 7m17s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  7m23s (x2 over 7m23s)  kubelet     Node multinode-602258-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m23s (x2 over 7m23s)  kubelet     Node multinode-602258-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m23s (x2 over 7m23s)  kubelet     Node multinode-602258-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m23s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 7m23s                  kubelet     Starting kubelet.
	  Normal  NodeReady                7m4s                   kubelet     Node multinode-602258-m02 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    58s (x2 over 58s)      kubelet     Node multinode-602258-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x2 over 58s)      kubelet     Node multinode-602258-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  58s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  58s (x2 over 58s)      kubelet     Node multinode-602258-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                40s                    kubelet     Node multinode-602258-m02 status is now: NodeReady
	
	
	Name:               multinode-602258-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-602258-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=multinode-602258
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T17_53_20_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:53:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-602258-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:53:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:53:37 +0000   Mon, 29 Jul 2024 17:53:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:53:37 +0000   Mon, 29 Jul 2024 17:53:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:53:37 +0000   Mon, 29 Jul 2024 17:53:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:53:37 +0000   Mon, 29 Jul 2024 17:53:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.21
	  Hostname:    multinode-602258-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 224f879324b34c029c6b35e9063d655e
	  System UUID:                224f8793-24b3-4c02-9c6b-35e9063d655e
	  Boot ID:                    b7197a0e-4b1f-4d80-882c-a2c6a66eee56
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-jw9gn       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m28s
	  kube-system                 kube-proxy-5txpb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m24s                  kube-proxy  
	  Normal  Starting                 16s                    kube-proxy  
	  Normal  Starting                 5m36s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m29s (x2 over 6m29s)  kubelet     Node multinode-602258-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m29s (x2 over 6m29s)  kubelet     Node multinode-602258-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m29s (x2 over 6m29s)  kubelet     Node multinode-602258-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m28s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m11s                  kubelet     Node multinode-602258-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m41s (x2 over 5m41s)  kubelet     Node multinode-602258-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m41s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m41s (x2 over 5m41s)  kubelet     Node multinode-602258-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m41s (x2 over 5m41s)  kubelet     Node multinode-602258-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m24s                  kubelet     Node multinode-602258-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  21s (x2 over 21s)      kubelet     Node multinode-602258-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x2 over 21s)      kubelet     Node multinode-602258-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x2 over 21s)      kubelet     Node multinode-602258-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-602258-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.057526] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065920] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.180315] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.146008] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.277522] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +4.069706] systemd-fstab-generator[758]: Ignoring "noauto" option for root device
	[  +3.920201] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.061457] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.502183] systemd-fstab-generator[1280]: Ignoring "noauto" option for root device
	[  +0.075359] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.454310] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.127586] systemd-fstab-generator[1481]: Ignoring "noauto" option for root device
	[ +13.936945] kauditd_printk_skb: 60 callbacks suppressed
	[Jul29 17:46] kauditd_printk_skb: 12 callbacks suppressed
	[Jul29 17:51] systemd-fstab-generator[2802]: Ignoring "noauto" option for root device
	[  +0.138219] systemd-fstab-generator[2814]: Ignoring "noauto" option for root device
	[  +0.177249] systemd-fstab-generator[2828]: Ignoring "noauto" option for root device
	[  +0.158200] systemd-fstab-generator[2840]: Ignoring "noauto" option for root device
	[  +0.286282] systemd-fstab-generator[2868]: Ignoring "noauto" option for root device
	[  +0.729519] systemd-fstab-generator[2967]: Ignoring "noauto" option for root device
	[  +2.077819] systemd-fstab-generator[3090]: Ignoring "noauto" option for root device
	[Jul29 17:52] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.963818] kauditd_printk_skb: 32 callbacks suppressed
	[  +2.964547] systemd-fstab-generator[3924]: Ignoring "noauto" option for root device
	[ +18.739675] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [6e7844975c2969533b47b744772eb44171fe78572163dc172999a28d44fcf4ee] <==
	{"level":"info","ts":"2024-07-29T17:45:11.765285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e5f6aca4c72f5b22 elected leader e5f6aca4c72f5b22 at term 2"}
	{"level":"info","ts":"2024-07-29T17:45:11.769674Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T17:45:11.774625Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e5f6aca4c72f5b22","local-member-attributes":"{Name:multinode-602258 ClientURLs:[https://192.168.39.218:2379]}","request-path":"/0/members/e5f6aca4c72f5b22/attributes","cluster-id":"2483a61a4a74c1c4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T17:45:11.774836Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2483a61a4a74c1c4","local-member-id":"e5f6aca4c72f5b22","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T17:45:11.775053Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T17:45:11.77509Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T17:45:11.775152Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T17:45:11.775744Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T17:45:11.780341Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T17:45:11.780392Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T17:45:11.786821Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.218:2379"}
	{"level":"info","ts":"2024-07-29T17:45:11.789766Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T17:46:17.587416Z","caller":"traceutil/trace.go:171","msg":"trace[1313146405] transaction","detail":"{read_only:false; response_revision:447; number_of_response:1; }","duration":"142.700051ms","start":"2024-07-29T17:46:17.444686Z","end":"2024-07-29T17:46:17.587386Z","steps":["trace[1313146405] 'process raft request'  (duration: 134.756885ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T17:47:12.021342Z","caller":"traceutil/trace.go:171","msg":"trace[281310603] transaction","detail":"{read_only:false; response_revision:583; number_of_response:1; }","duration":"164.064708ms","start":"2024-07-29T17:47:11.857143Z","end":"2024-07-29T17:47:12.021208Z","steps":["trace[281310603] 'process raft request'  (duration: 101.977312ms)","trace[281310603] 'compare'  (duration: 61.990741ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T17:47:12.021416Z","caller":"traceutil/trace.go:171","msg":"trace[1350048901] transaction","detail":"{read_only:false; response_revision:584; number_of_response:1; }","duration":"164.12933ms","start":"2024-07-29T17:47:11.857266Z","end":"2024-07-29T17:47:12.021396Z","steps":["trace[1350048901] 'process raft request'  (duration: 163.921031ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T17:50:22.075852Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T17:50:22.075972Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-602258","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.218:2380"],"advertise-client-urls":["https://192.168.39.218:2379"]}
	{"level":"warn","ts":"2024-07-29T17:50:22.076057Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T17:50:22.076133Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T17:50:22.163937Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.218:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T17:50:22.16399Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.218:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T17:50:22.164053Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e5f6aca4c72f5b22","current-leader-member-id":"e5f6aca4c72f5b22"}
	{"level":"info","ts":"2024-07-29T17:50:22.16658Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.218:2380"}
	{"level":"info","ts":"2024-07-29T17:50:22.166762Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.218:2380"}
	{"level":"info","ts":"2024-07-29T17:50:22.1668Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-602258","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.218:2380"],"advertise-client-urls":["https://192.168.39.218:2379"]}
	
	
	==> etcd [b655e5789fb522e6dae942b61ff483971a837d40607e3530bc9c1ae524e627e1] <==
	{"level":"info","ts":"2024-07-29T17:51:58.324975Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T17:51:58.326084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5f6aca4c72f5b22 switched to configuration voters=(16570621702672702242)"}
	{"level":"info","ts":"2024-07-29T17:51:58.326253Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2483a61a4a74c1c4","local-member-id":"e5f6aca4c72f5b22","added-peer-id":"e5f6aca4c72f5b22","added-peer-peer-urls":["https://192.168.39.218:2380"]}
	{"level":"info","ts":"2024-07-29T17:51:58.326493Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2483a61a4a74c1c4","local-member-id":"e5f6aca4c72f5b22","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T17:51:58.326569Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T17:51:58.330481Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T17:51:58.33073Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e5f6aca4c72f5b22","initial-advertise-peer-urls":["https://192.168.39.218:2380"],"listen-peer-urls":["https://192.168.39.218:2380"],"advertise-client-urls":["https://192.168.39.218:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.218:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T17:51:58.330845Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T17:51:58.330951Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.218:2380"}
	{"level":"info","ts":"2024-07-29T17:51:58.330979Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.218:2380"}
	{"level":"info","ts":"2024-07-29T17:52:00.207175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5f6aca4c72f5b22 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T17:52:00.207281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5f6aca4c72f5b22 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T17:52:00.207335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5f6aca4c72f5b22 received MsgPreVoteResp from e5f6aca4c72f5b22 at term 2"}
	{"level":"info","ts":"2024-07-29T17:52:00.207347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5f6aca4c72f5b22 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T17:52:00.207352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5f6aca4c72f5b22 received MsgVoteResp from e5f6aca4c72f5b22 at term 3"}
	{"level":"info","ts":"2024-07-29T17:52:00.20736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5f6aca4c72f5b22 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T17:52:00.20737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e5f6aca4c72f5b22 elected leader e5f6aca4c72f5b22 at term 3"}
	{"level":"info","ts":"2024-07-29T17:52:00.212104Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e5f6aca4c72f5b22","local-member-attributes":"{Name:multinode-602258 ClientURLs:[https://192.168.39.218:2379]}","request-path":"/0/members/e5f6aca4c72f5b22/attributes","cluster-id":"2483a61a4a74c1c4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T17:52:00.212351Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T17:52:00.212114Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T17:52:00.212845Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T17:52:00.213609Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T17:52:00.214639Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T17:52:00.215096Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.218:2379"}
	{"level":"info","ts":"2024-07-29T17:53:23.610382Z","caller":"traceutil/trace.go:171","msg":"trace[1060692683] transaction","detail":"{read_only:false; response_revision:1125; number_of_response:1; }","duration":"163.159234ms","start":"2024-07-29T17:53:23.447191Z","end":"2024-07-29T17:53:23.61035Z","steps":["trace[1060692683] 'process raft request'  (duration: 162.783226ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:53:40 up 8 min,  0 users,  load average: 0.79, 0.45, 0.21
	Linux multinode-602258 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3426a69c32cd243e491fea11995a1bf631263c68a605d31e6a21b97ce4d0ac4b] <==
	I0729 17:52:54.271315       1 main.go:299] handling current node
	I0729 17:53:04.270440       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0729 17:53:04.270500       1 main.go:299] handling current node
	I0729 17:53:04.270515       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0729 17:53:04.270520       1 main.go:322] Node multinode-602258-m02 has CIDR [10.244.1.0/24] 
	I0729 17:53:04.270696       1 main.go:295] Handling node with IPs: map[192.168.39.21:{}]
	I0729 17:53:04.270722       1 main.go:322] Node multinode-602258-m03 has CIDR [10.244.3.0/24] 
	I0729 17:53:14.270066       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0729 17:53:14.270192       1 main.go:299] handling current node
	I0729 17:53:14.270270       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0729 17:53:14.270316       1 main.go:322] Node multinode-602258-m02 has CIDR [10.244.1.0/24] 
	I0729 17:53:14.270539       1 main.go:295] Handling node with IPs: map[192.168.39.21:{}]
	I0729 17:53:14.270585       1 main.go:322] Node multinode-602258-m03 has CIDR [10.244.3.0/24] 
	I0729 17:53:24.269648       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0729 17:53:24.269787       1 main.go:299] handling current node
	I0729 17:53:24.269866       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0729 17:53:24.269914       1 main.go:322] Node multinode-602258-m02 has CIDR [10.244.1.0/24] 
	I0729 17:53:24.270134       1 main.go:295] Handling node with IPs: map[192.168.39.21:{}]
	I0729 17:53:24.270168       1 main.go:322] Node multinode-602258-m03 has CIDR [10.244.2.0/24] 
	I0729 17:53:34.271340       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0729 17:53:34.271436       1 main.go:299] handling current node
	I0729 17:53:34.271464       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0729 17:53:34.271481       1 main.go:322] Node multinode-602258-m02 has CIDR [10.244.1.0/24] 
	I0729 17:53:34.271662       1 main.go:295] Handling node with IPs: map[192.168.39.21:{}]
	I0729 17:53:34.271692       1 main.go:322] Node multinode-602258-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [d7615041ffc1aef8098e347b0f7f11240291b7814bf03e59b1e336bc7d7bb7c0] <==
	I0729 17:49:34.280581       1 main.go:322] Node multinode-602258-m03 has CIDR [10.244.3.0/24] 
	I0729 17:49:44.283978       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0729 17:49:44.284086       1 main.go:299] handling current node
	I0729 17:49:44.284116       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0729 17:49:44.284134       1 main.go:322] Node multinode-602258-m02 has CIDR [10.244.1.0/24] 
	I0729 17:49:44.284356       1 main.go:295] Handling node with IPs: map[192.168.39.21:{}]
	I0729 17:49:44.284390       1 main.go:322] Node multinode-602258-m03 has CIDR [10.244.3.0/24] 
	I0729 17:49:54.288459       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0729 17:49:54.288518       1 main.go:299] handling current node
	I0729 17:49:54.288540       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0729 17:49:54.288545       1 main.go:322] Node multinode-602258-m02 has CIDR [10.244.1.0/24] 
	I0729 17:49:54.288684       1 main.go:295] Handling node with IPs: map[192.168.39.21:{}]
	I0729 17:49:54.288690       1 main.go:322] Node multinode-602258-m03 has CIDR [10.244.3.0/24] 
	I0729 17:50:04.286785       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0729 17:50:04.286964       1 main.go:299] handling current node
	I0729 17:50:04.287027       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0729 17:50:04.287050       1 main.go:322] Node multinode-602258-m02 has CIDR [10.244.1.0/24] 
	I0729 17:50:04.287216       1 main.go:295] Handling node with IPs: map[192.168.39.21:{}]
	I0729 17:50:04.287319       1 main.go:322] Node multinode-602258-m03 has CIDR [10.244.3.0/24] 
	I0729 17:50:14.285710       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0729 17:50:14.285861       1 main.go:299] handling current node
	I0729 17:50:14.285900       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0729 17:50:14.285919       1 main.go:322] Node multinode-602258-m02 has CIDR [10.244.1.0/24] 
	I0729 17:50:14.286071       1 main.go:295] Handling node with IPs: map[192.168.39.21:{}]
	I0729 17:50:14.286092       1 main.go:322] Node multinode-602258-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [07fee3a17c566e898bf4bda366cd3fef0865591a42bdfbf1d81036e598ac14ce] <==
	I0729 17:50:22.066937       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0729 17:50:22.096947       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0729 17:50:22.097679       1 logging.go:59] [core] [Channel #13 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.097833       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.097863       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.097967       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.098000       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.098047       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.098072       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.098117       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.098150       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.098184       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.098211       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.098285       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.098489       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.098597       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.098658       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0729 17:50:22.098771       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0729 17:50:22.098850       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.098909       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.098961       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.099008       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.099074       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.099153       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.100376       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [fd3c8ef53e88607de4cee229954a3b64b2c31ca195b03e2e83e6b390b674f06a] <==
	I0729 17:52:01.472282       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0729 17:52:01.519334       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 17:52:01.519426       1 policy_source.go:224] refreshing policies
	I0729 17:52:01.534813       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 17:52:01.546872       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 17:52:01.552873       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 17:52:01.553837       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 17:52:01.553889       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 17:52:01.553896       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 17:52:01.558176       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 17:52:01.558684       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 17:52:01.558763       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 17:52:01.572668       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 17:52:01.572739       1 aggregator.go:165] initial CRD sync complete...
	I0729 17:52:01.572760       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 17:52:01.572765       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 17:52:01.572771       1 cache.go:39] Caches are synced for autoregister controller
	I0729 17:52:02.446785       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 17:52:03.906574       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 17:52:04.053594       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 17:52:04.072003       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 17:52:04.167693       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 17:52:04.177264       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 17:52:14.731964       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 17:52:14.760362       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [1f624d4b42189dfe667a9aad521e37765c1b61fa5a1300f05f7a937db2c6a6fa] <==
	I0729 17:46:17.590629       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-602258-m02\" does not exist"
	I0729 17:46:17.671217       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-602258-m02" podCIDRs=["10.244.1.0/24"]
	I0729 17:46:19.592766       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-602258-m02"
	I0729 17:46:36.043726       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-602258-m02"
	I0729 17:46:38.262402       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.355425ms"
	I0729 17:46:38.277775       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.310855ms"
	I0729 17:46:38.277933       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.063µs"
	I0729 17:46:38.281762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.371µs"
	I0729 17:46:39.744791       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.769785ms"
	I0729 17:46:39.744869       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.218µs"
	I0729 17:46:39.850428       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.63819ms"
	I0729 17:46:39.850502       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.596µs"
	I0729 17:47:12.025508       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-602258-m03\" does not exist"
	I0729 17:47:12.025754       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-602258-m02"
	I0729 17:47:12.038996       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-602258-m03" podCIDRs=["10.244.2.0/24"]
	I0729 17:47:14.627810       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-602258-m03"
	I0729 17:47:29.360621       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-602258-m02"
	I0729 17:47:57.833943       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-602258-m02"
	I0729 17:47:59.193170       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-602258-m02"
	I0729 17:47:59.195495       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-602258-m03\" does not exist"
	I0729 17:47:59.204572       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-602258-m03" podCIDRs=["10.244.3.0/24"]
	I0729 17:48:16.684716       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-602258-m02"
	I0729 17:49:04.690657       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-602258-m03"
	I0729 17:49:04.743495       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.131727ms"
	I0729 17:49:04.743733       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.573µs"
	
	
	==> kube-controller-manager [7f75287ad92d6d5f2e5b7da85de7858362a02e33879b2c77184aafed885e2e0d] <==
	I0729 17:52:15.352258       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 17:52:38.105167       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.794428ms"
	I0729 17:52:38.105341       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.969µs"
	I0729 17:52:38.118508       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.950735ms"
	I0729 17:52:38.147943       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.38119ms"
	I0729 17:52:38.148032       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.065µs"
	I0729 17:52:40.375476       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.603µs"
	I0729 17:52:42.201771       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-602258-m02\" does not exist"
	I0729 17:52:42.211553       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-602258-m02" podCIDRs=["10.244.1.0/24"]
	I0729 17:52:44.084395       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.681µs"
	I0729 17:52:44.095065       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.634µs"
	I0729 17:52:44.107997       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.261µs"
	I0729 17:52:44.140771       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.746µs"
	I0729 17:52:44.148467       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.742µs"
	I0729 17:52:44.153124       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.031µs"
	I0729 17:53:00.089400       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-602258-m02"
	I0729 17:53:00.109945       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.021µs"
	I0729 17:53:00.130031       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.957µs"
	I0729 17:53:02.501625       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.765055ms"
	I0729 17:53:02.502363       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.13µs"
	I0729 17:53:18.312089       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-602258-m02"
	I0729 17:53:19.369190       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-602258-m03\" does not exist"
	I0729 17:53:19.369301       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-602258-m02"
	I0729 17:53:19.379564       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-602258-m03" podCIDRs=["10.244.2.0/24"]
	I0729 17:53:37.103664       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-602258-m03"
	
	
	==> kube-proxy [864297549b1272800bfebdd28175a349b3ee8ef7c7bbd78c771eaad9e02b25cc] <==
	I0729 17:45:31.626181       1 server_linux.go:69] "Using iptables proxy"
	I0729 17:45:31.692323       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.218"]
	I0729 17:45:31.757481       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 17:45:31.757522       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 17:45:31.757537       1 server_linux.go:165] "Using iptables Proxier"
	I0729 17:45:31.760208       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 17:45:31.760523       1 server.go:872] "Version info" version="v1.30.3"
	I0729 17:45:31.760536       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 17:45:31.762523       1 config.go:192] "Starting service config controller"
	I0729 17:45:31.762696       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 17:45:31.762726       1 config.go:101] "Starting endpoint slice config controller"
	I0729 17:45:31.762731       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 17:45:31.763651       1 config.go:319] "Starting node config controller"
	I0729 17:45:31.763659       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 17:45:31.864181       1 shared_informer.go:320] Caches are synced for node config
	I0729 17:45:31.864212       1 shared_informer.go:320] Caches are synced for service config
	I0729 17:45:31.864306       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [bf1a3774da5a1b300885fa869b6ee486244da9331c7388294b88d4cc568c1065] <==
	I0729 17:52:03.469289       1 server_linux.go:69] "Using iptables proxy"
	I0729 17:52:03.516071       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.218"]
	I0729 17:52:03.658433       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 17:52:03.658497       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 17:52:03.658515       1 server_linux.go:165] "Using iptables Proxier"
	I0729 17:52:03.665387       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 17:52:03.667578       1 server.go:872] "Version info" version="v1.30.3"
	I0729 17:52:03.667840       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 17:52:03.669393       1 config.go:192] "Starting service config controller"
	I0729 17:52:03.675359       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 17:52:03.670320       1 config.go:101] "Starting endpoint slice config controller"
	I0729 17:52:03.675487       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 17:52:03.670961       1 config.go:319] "Starting node config controller"
	I0729 17:52:03.675497       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 17:52:03.779653       1 shared_informer.go:320] Caches are synced for node config
	I0729 17:52:03.779685       1 shared_informer.go:320] Caches are synced for service config
	I0729 17:52:03.779726       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [619d14875058f9bdafffdb8f819f0dbada1c276a2b0c1a22286f2a986be363bf] <==
	I0729 17:51:59.296949       1 serving.go:380] Generated self-signed cert in-memory
	W0729 17:52:01.489846       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 17:52:01.490323       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 17:52:01.490379       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 17:52:01.490404       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 17:52:01.535513       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 17:52:01.536072       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 17:52:01.543056       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 17:52:01.543322       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 17:52:01.545531       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 17:52:01.543464       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 17:52:01.646190       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e82eb1db29cc5f5d61344dd7ed6985093a7f202f4cdac7abf12ba859cab24ac6] <==
	W0729 17:45:14.022069       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 17:45:14.022120       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 17:45:14.022184       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 17:45:14.022215       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 17:45:14.899560       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 17:45:14.899649       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 17:45:14.915272       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 17:45:14.915313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 17:45:15.005943       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 17:45:15.006574       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 17:45:15.093493       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 17:45:15.093627       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 17:45:15.136203       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 17:45:15.136368       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 17:45:15.186504       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 17:45:15.186680       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 17:45:15.273476       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 17:45:15.273555       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 17:45:15.342447       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 17:45:15.342494       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0729 17:45:17.515381       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 17:50:22.071352       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0729 17:50:22.071510       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0729 17:50:22.071949       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0729 17:50:22.081281       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 29 17:51:58 multinode-602258 kubelet[3097]: E0729 17:51:58.106444    3097 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-602258&limit=500&resourceVersion=0": dial tcp 192.168.39.218:8443: connect: connection refused
	Jul 29 17:51:58 multinode-602258 kubelet[3097]: I0729 17:51:58.766052    3097 kubelet_node_status.go:73] "Attempting to register node" node="multinode-602258"
	Jul 29 17:52:01 multinode-602258 kubelet[3097]: I0729 17:52:01.593574    3097 kubelet_node_status.go:112] "Node was previously registered" node="multinode-602258"
	Jul 29 17:52:01 multinode-602258 kubelet[3097]: I0729 17:52:01.593782    3097 kubelet_node_status.go:76] "Successfully registered node" node="multinode-602258"
	Jul 29 17:52:01 multinode-602258 kubelet[3097]: I0729 17:52:01.596117    3097 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 17:52:01 multinode-602258 kubelet[3097]: I0729 17:52:01.597511    3097 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 17:52:02 multinode-602258 kubelet[3097]: I0729 17:52:02.241594    3097 apiserver.go:52] "Watching apiserver"
	Jul 29 17:52:02 multinode-602258 kubelet[3097]: I0729 17:52:02.246397    3097 topology_manager.go:215] "Topology Admit Handler" podUID="dbbeed00-0740-41dc-b9f2-aa03336074ac" podNamespace="kube-system" podName="coredns-7db6d8ff4d-b7fmn"
	Jul 29 17:52:02 multinode-602258 kubelet[3097]: I0729 17:52:02.246661    3097 topology_manager.go:215] "Topology Admit Handler" podUID="700c5f4f-8bac-4a69-8174-0b8a80c4e831" podNamespace="kube-system" podName="kindnet-68dnv"
	Jul 29 17:52:02 multinode-602258 kubelet[3097]: I0729 17:52:02.246810    3097 topology_manager.go:215] "Topology Admit Handler" podUID="8951fee7-e31c-401a-8688-79487ea5fc64" podNamespace="kube-system" podName="kube-proxy-shhsx"
	Jul 29 17:52:02 multinode-602258 kubelet[3097]: I0729 17:52:02.246913    3097 topology_manager.go:215] "Topology Admit Handler" podUID="dee56b25-3f87-483c-8fda-95989162e3ba" podNamespace="kube-system" podName="storage-provisioner"
	Jul 29 17:52:02 multinode-602258 kubelet[3097]: I0729 17:52:02.247020    3097 topology_manager.go:215] "Topology Admit Handler" podUID="1c31cd36-a917-4a07-a18f-887c7defa6e2" podNamespace="default" podName="busybox-fc5497c4f-kqrzf"
	Jul 29 17:52:02 multinode-602258 kubelet[3097]: I0729 17:52:02.255766    3097 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 29 17:52:02 multinode-602258 kubelet[3097]: I0729 17:52:02.321526    3097 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/700c5f4f-8bac-4a69-8174-0b8a80c4e831-lib-modules\") pod \"kindnet-68dnv\" (UID: \"700c5f4f-8bac-4a69-8174-0b8a80c4e831\") " pod="kube-system/kindnet-68dnv"
	Jul 29 17:52:02 multinode-602258 kubelet[3097]: I0729 17:52:02.321630    3097 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/dee56b25-3f87-483c-8fda-95989162e3ba-tmp\") pod \"storage-provisioner\" (UID: \"dee56b25-3f87-483c-8fda-95989162e3ba\") " pod="kube-system/storage-provisioner"
	Jul 29 17:52:02 multinode-602258 kubelet[3097]: I0729 17:52:02.321813    3097 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/700c5f4f-8bac-4a69-8174-0b8a80c4e831-cni-cfg\") pod \"kindnet-68dnv\" (UID: \"700c5f4f-8bac-4a69-8174-0b8a80c4e831\") " pod="kube-system/kindnet-68dnv"
	Jul 29 17:52:02 multinode-602258 kubelet[3097]: I0729 17:52:02.321847    3097 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/700c5f4f-8bac-4a69-8174-0b8a80c4e831-xtables-lock\") pod \"kindnet-68dnv\" (UID: \"700c5f4f-8bac-4a69-8174-0b8a80c4e831\") " pod="kube-system/kindnet-68dnv"
	Jul 29 17:52:02 multinode-602258 kubelet[3097]: I0729 17:52:02.321928    3097 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8951fee7-e31c-401a-8688-79487ea5fc64-xtables-lock\") pod \"kube-proxy-shhsx\" (UID: \"8951fee7-e31c-401a-8688-79487ea5fc64\") " pod="kube-system/kube-proxy-shhsx"
	Jul 29 17:52:02 multinode-602258 kubelet[3097]: I0729 17:52:02.322019    3097 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8951fee7-e31c-401a-8688-79487ea5fc64-lib-modules\") pod \"kube-proxy-shhsx\" (UID: \"8951fee7-e31c-401a-8688-79487ea5fc64\") " pod="kube-system/kube-proxy-shhsx"
	Jul 29 17:52:04 multinode-602258 kubelet[3097]: I0729 17:52:04.629552    3097 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 29 17:52:57 multinode-602258 kubelet[3097]: E0729 17:52:57.307914    3097 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:52:57 multinode-602258 kubelet[3097]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:52:57 multinode-602258 kubelet[3097]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:52:57 multinode-602258 kubelet[3097]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:52:57 multinode-602258 kubelet[3097]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 17:53:39.660389   49324 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19345-11206/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-602258 -n multinode-602258
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-602258 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (322.09s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-602258 stop: exit status 82 (2m0.462883162s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-602258-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-602258 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-602258 status: exit status 3 (18.793572182s)

                                                
                                                
-- stdout --
	multinode-602258
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-602258-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 17:56:02.930645   49987 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.107:22: connect: no route to host
	E0729 17:56:02.930678   49987 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.107:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-602258 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-602258 -n multinode-602258
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-602258 logs -n 25: (1.463797486s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-602258 ssh -n                                                                 | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | multinode-602258-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-602258 cp multinode-602258-m02:/home/docker/cp-test.txt                       | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | multinode-602258:/home/docker/cp-test_multinode-602258-m02_multinode-602258.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-602258 ssh -n                                                                 | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | multinode-602258-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-602258 ssh -n multinode-602258 sudo cat                                       | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | /home/docker/cp-test_multinode-602258-m02_multinode-602258.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-602258 cp multinode-602258-m02:/home/docker/cp-test.txt                       | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | multinode-602258-m03:/home/docker/cp-test_multinode-602258-m02_multinode-602258-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-602258 ssh -n                                                                 | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | multinode-602258-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-602258 ssh -n multinode-602258-m03 sudo cat                                   | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | /home/docker/cp-test_multinode-602258-m02_multinode-602258-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-602258 cp testdata/cp-test.txt                                                | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | multinode-602258-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-602258 ssh -n                                                                 | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | multinode-602258-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-602258 cp multinode-602258-m03:/home/docker/cp-test.txt                       | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile669002766/001/cp-test_multinode-602258-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-602258 ssh -n                                                                 | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | multinode-602258-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-602258 cp multinode-602258-m03:/home/docker/cp-test.txt                       | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | multinode-602258:/home/docker/cp-test_multinode-602258-m03_multinode-602258.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-602258 ssh -n                                                                 | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | multinode-602258-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-602258 ssh -n multinode-602258 sudo cat                                       | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | /home/docker/cp-test_multinode-602258-m03_multinode-602258.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-602258 cp multinode-602258-m03:/home/docker/cp-test.txt                       | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | multinode-602258-m02:/home/docker/cp-test_multinode-602258-m03_multinode-602258-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-602258 ssh -n                                                                 | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | multinode-602258-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-602258 ssh -n multinode-602258-m02 sudo cat                                   | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	|         | /home/docker/cp-test_multinode-602258-m03_multinode-602258-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-602258 node stop m03                                                          | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:47 UTC |
	| node    | multinode-602258 node start                                                             | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:47 UTC | 29 Jul 24 17:48 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-602258                                                                | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:48 UTC |                     |
	| stop    | -p multinode-602258                                                                     | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:48 UTC |                     |
	| start   | -p multinode-602258                                                                     | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:50 UTC | 29 Jul 24 17:53 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-602258                                                                | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC |                     |
	| node    | multinode-602258 node delete                                                            | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-602258 stop                                                                   | multinode-602258 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 17:50:21
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 17:50:21.239950   48198 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:50:21.240217   48198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:50:21.240226   48198 out.go:304] Setting ErrFile to fd 2...
	I0729 17:50:21.240230   48198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:50:21.240406   48198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 17:50:21.240957   48198 out.go:298] Setting JSON to false
	I0729 17:50:21.241946   48198 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5573,"bootTime":1722269848,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 17:50:21.242005   48198 start.go:139] virtualization: kvm guest
	I0729 17:50:21.244304   48198 out.go:177] * [multinode-602258] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 17:50:21.245656   48198 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 17:50:21.245695   48198 notify.go:220] Checking for updates...
	I0729 17:50:21.247758   48198 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:50:21.248957   48198 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 17:50:21.250089   48198 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 17:50:21.251303   48198 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 17:50:21.252390   48198 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:50:21.253873   48198 config.go:182] Loaded profile config "multinode-602258": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:50:21.253963   48198 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:50:21.254380   48198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:50:21.254439   48198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:50:21.269819   48198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45475
	I0729 17:50:21.270306   48198 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:50:21.270842   48198 main.go:141] libmachine: Using API Version  1
	I0729 17:50:21.270857   48198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:50:21.271233   48198 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:50:21.271402   48198 main.go:141] libmachine: (multinode-602258) Calling .DriverName
	I0729 17:50:21.306924   48198 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 17:50:21.308157   48198 start.go:297] selected driver: kvm2
	I0729 17:50:21.308176   48198 start.go:901] validating driver "kvm2" against &{Name:multinode-602258 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-602258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.218 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.107 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:50:21.308345   48198 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:50:21.308686   48198 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:50:21.308773   48198 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19345-11206/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 17:50:21.323725   48198 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 17:50:21.324453   48198 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:50:21.324516   48198 cni.go:84] Creating CNI manager for ""
	I0729 17:50:21.324536   48198 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 17:50:21.324606   48198 start.go:340] cluster config:
	{Name:multinode-602258 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-602258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.218 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.107 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:50:21.324751   48198 iso.go:125] acquiring lock: {Name:mke302f851ce8256f9b44dd080ed38df68285cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:50:21.326654   48198 out.go:177] * Starting "multinode-602258" primary control-plane node in "multinode-602258" cluster
	I0729 17:50:21.327834   48198 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:50:21.327867   48198 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 17:50:21.327879   48198 cache.go:56] Caching tarball of preloaded images
	I0729 17:50:21.327962   48198 preload.go:172] Found /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 17:50:21.327974   48198 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 17:50:21.328132   48198 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/multinode-602258/config.json ...
	I0729 17:50:21.328362   48198 start.go:360] acquireMachinesLock for multinode-602258: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:50:21.328424   48198 start.go:364] duration metric: took 42.22µs to acquireMachinesLock for "multinode-602258"
	I0729 17:50:21.328446   48198 start.go:96] Skipping create...Using existing machine configuration
	I0729 17:50:21.328457   48198 fix.go:54] fixHost starting: 
	I0729 17:50:21.328702   48198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:50:21.328737   48198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:50:21.343331   48198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38305
	I0729 17:50:21.343801   48198 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:50:21.344242   48198 main.go:141] libmachine: Using API Version  1
	I0729 17:50:21.344263   48198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:50:21.344601   48198 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:50:21.344818   48198 main.go:141] libmachine: (multinode-602258) Calling .DriverName
	I0729 17:50:21.344994   48198 main.go:141] libmachine: (multinode-602258) Calling .GetState
	I0729 17:50:21.346803   48198 fix.go:112] recreateIfNeeded on multinode-602258: state=Running err=<nil>
	W0729 17:50:21.346822   48198 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 17:50:21.348828   48198 out.go:177] * Updating the running kvm2 "multinode-602258" VM ...
	I0729 17:50:21.350021   48198 machine.go:94] provisionDockerMachine start ...
	I0729 17:50:21.350047   48198 main.go:141] libmachine: (multinode-602258) Calling .DriverName
	I0729 17:50:21.350272   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHHostname
	I0729 17:50:21.352838   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.353377   48198 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:50:21.353404   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.353593   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHPort
	I0729 17:50:21.353793   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:50:21.353963   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:50:21.354114   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHUsername
	I0729 17:50:21.354275   48198 main.go:141] libmachine: Using SSH client type: native
	I0729 17:50:21.354472   48198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0729 17:50:21.354485   48198 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 17:50:21.464516   48198 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-602258
	
	I0729 17:50:21.464542   48198 main.go:141] libmachine: (multinode-602258) Calling .GetMachineName
	I0729 17:50:21.464830   48198 buildroot.go:166] provisioning hostname "multinode-602258"
	I0729 17:50:21.464861   48198 main.go:141] libmachine: (multinode-602258) Calling .GetMachineName
	I0729 17:50:21.465073   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHHostname
	I0729 17:50:21.467735   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.468102   48198 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:50:21.468146   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.468240   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHPort
	I0729 17:50:21.468404   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:50:21.468546   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:50:21.468678   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHUsername
	I0729 17:50:21.468848   48198 main.go:141] libmachine: Using SSH client type: native
	I0729 17:50:21.469011   48198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0729 17:50:21.469023   48198 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-602258 && echo "multinode-602258" | sudo tee /etc/hostname
	I0729 17:50:21.592791   48198 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-602258
	
	I0729 17:50:21.592814   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHHostname
	I0729 17:50:21.595934   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.596356   48198 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:50:21.596386   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.596544   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHPort
	I0729 17:50:21.596717   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:50:21.596870   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:50:21.596998   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHUsername
	I0729 17:50:21.597116   48198 main.go:141] libmachine: Using SSH client type: native
	I0729 17:50:21.597323   48198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0729 17:50:21.597340   48198 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-602258' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-602258/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-602258' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 17:50:21.703496   48198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:50:21.703536   48198 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 17:50:21.703581   48198 buildroot.go:174] setting up certificates
	I0729 17:50:21.703592   48198 provision.go:84] configureAuth start
	I0729 17:50:21.703610   48198 main.go:141] libmachine: (multinode-602258) Calling .GetMachineName
	I0729 17:50:21.703918   48198 main.go:141] libmachine: (multinode-602258) Calling .GetIP
	I0729 17:50:21.706519   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.706826   48198 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:50:21.706869   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.707021   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHHostname
	I0729 17:50:21.709179   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.709609   48198 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:50:21.709648   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.709716   48198 provision.go:143] copyHostCerts
	I0729 17:50:21.709745   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 17:50:21.709782   48198 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 17:50:21.709799   48198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 17:50:21.709877   48198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 17:50:21.709968   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 17:50:21.709993   48198 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 17:50:21.710000   48198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 17:50:21.710041   48198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 17:50:21.710109   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 17:50:21.710132   48198 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 17:50:21.710138   48198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 17:50:21.710177   48198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 17:50:21.710242   48198 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.multinode-602258 san=[127.0.0.1 192.168.39.218 localhost minikube multinode-602258]
	I0729 17:50:21.786991   48198 provision.go:177] copyRemoteCerts
	I0729 17:50:21.787066   48198 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 17:50:21.787102   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHHostname
	I0729 17:50:21.789741   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.790085   48198 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:50:21.790110   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.790337   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHPort
	I0729 17:50:21.790507   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:50:21.790661   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHUsername
	I0729 17:50:21.790818   48198 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/multinode-602258/id_rsa Username:docker}
	I0729 17:50:21.873710   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 17:50:21.873812   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0729 17:50:21.900693   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 17:50:21.900771   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 17:50:21.926847   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 17:50:21.926919   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 17:50:21.951119   48198 provision.go:87] duration metric: took 247.510328ms to configureAuth
	I0729 17:50:21.951156   48198 buildroot.go:189] setting minikube options for container-runtime
	I0729 17:50:21.951441   48198 config.go:182] Loaded profile config "multinode-602258": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:50:21.951514   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHHostname
	I0729 17:50:21.954053   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.954460   48198 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:50:21.954480   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:50:21.954684   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHPort
	I0729 17:50:21.954849   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:50:21.955036   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:50:21.955226   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHUsername
	I0729 17:50:21.955365   48198 main.go:141] libmachine: Using SSH client type: native
	I0729 17:50:21.955536   48198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0729 17:50:21.955550   48198 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 17:51:52.854508   48198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 17:51:52.854533   48198 machine.go:97] duration metric: took 1m31.504498331s to provisionDockerMachine
	I0729 17:51:52.854546   48198 start.go:293] postStartSetup for "multinode-602258" (driver="kvm2")
	I0729 17:51:52.854556   48198 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 17:51:52.854571   48198 main.go:141] libmachine: (multinode-602258) Calling .DriverName
	I0729 17:51:52.854939   48198 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 17:51:52.854981   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHHostname
	I0729 17:51:52.857904   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:51:52.858296   48198 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:51:52.858316   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:51:52.858486   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHPort
	I0729 17:51:52.858668   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:51:52.858844   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHUsername
	I0729 17:51:52.858989   48198 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/multinode-602258/id_rsa Username:docker}
	I0729 17:51:52.942387   48198 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 17:51:52.946491   48198 command_runner.go:130] > NAME=Buildroot
	I0729 17:51:52.946509   48198 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0729 17:51:52.946513   48198 command_runner.go:130] > ID=buildroot
	I0729 17:51:52.946518   48198 command_runner.go:130] > VERSION_ID=2023.02.9
	I0729 17:51:52.946523   48198 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0729 17:51:52.946582   48198 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 17:51:52.946606   48198 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 17:51:52.946669   48198 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 17:51:52.946743   48198 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 17:51:52.946756   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> /etc/ssl/certs/183932.pem
	I0729 17:51:52.946837   48198 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 17:51:52.956704   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 17:51:52.981025   48198 start.go:296] duration metric: took 126.466099ms for postStartSetup
	I0729 17:51:52.981108   48198 fix.go:56] duration metric: took 1m31.652649258s for fixHost
	I0729 17:51:52.981133   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHHostname
	I0729 17:51:52.983919   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:51:52.984269   48198 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:51:52.984290   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:51:52.984452   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHPort
	I0729 17:51:52.984652   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:51:52.984811   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:51:52.984956   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHUsername
	I0729 17:51:52.985105   48198 main.go:141] libmachine: Using SSH client type: native
	I0729 17:51:52.985267   48198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0729 17:51:52.985276   48198 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 17:51:53.091125   48198 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722275513.058323927
	
	I0729 17:51:53.091161   48198 fix.go:216] guest clock: 1722275513.058323927
	I0729 17:51:53.091171   48198 fix.go:229] Guest: 2024-07-29 17:51:53.058323927 +0000 UTC Remote: 2024-07-29 17:51:52.981114501 +0000 UTC m=+91.775679826 (delta=77.209426ms)
	I0729 17:51:53.091232   48198 fix.go:200] guest clock delta is within tolerance: 77.209426ms
	I0729 17:51:53.091240   48198 start.go:83] releasing machines lock for "multinode-602258", held for 1m31.762805153s
	I0729 17:51:53.091271   48198 main.go:141] libmachine: (multinode-602258) Calling .DriverName
	I0729 17:51:53.091545   48198 main.go:141] libmachine: (multinode-602258) Calling .GetIP
	I0729 17:51:53.094060   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:51:53.094385   48198 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:51:53.094413   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:51:53.094556   48198 main.go:141] libmachine: (multinode-602258) Calling .DriverName
	I0729 17:51:53.095022   48198 main.go:141] libmachine: (multinode-602258) Calling .DriverName
	I0729 17:51:53.095211   48198 main.go:141] libmachine: (multinode-602258) Calling .DriverName
	I0729 17:51:53.095308   48198 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 17:51:53.095355   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHHostname
	I0729 17:51:53.095415   48198 ssh_runner.go:195] Run: cat /version.json
	I0729 17:51:53.095455   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHHostname
	I0729 17:51:53.097788   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:51:53.097859   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:51:53.098131   48198 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:51:53.098157   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:51:53.098288   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHPort
	I0729 17:51:53.098301   48198 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:51:53.098325   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:51:53.098492   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:51:53.098496   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHPort
	I0729 17:51:53.098665   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHUsername
	I0729 17:51:53.098679   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:51:53.098913   48198 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/multinode-602258/id_rsa Username:docker}
	I0729 17:51:53.098954   48198 main.go:141] libmachine: (multinode-602258) Calling .GetSSHUsername
	I0729 17:51:53.099091   48198 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/multinode-602258/id_rsa Username:docker}
	I0729 17:51:53.175247   48198 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0729 17:51:53.175412   48198 ssh_runner.go:195] Run: systemctl --version
	I0729 17:51:53.198086   48198 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0729 17:51:53.198754   48198 command_runner.go:130] > systemd 252 (252)
	I0729 17:51:53.198784   48198 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0729 17:51:53.198845   48198 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 17:51:53.360905   48198 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0729 17:51:53.367180   48198 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0729 17:51:53.367259   48198 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 17:51:53.367311   48198 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 17:51:53.377377   48198 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 17:51:53.377400   48198 start.go:495] detecting cgroup driver to use...
	I0729 17:51:53.377463   48198 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 17:51:53.394066   48198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 17:51:53.408358   48198 docker.go:217] disabling cri-docker service (if available) ...
	I0729 17:51:53.408406   48198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 17:51:53.422420   48198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 17:51:53.436355   48198 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 17:51:53.579125   48198 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 17:51:53.720116   48198 docker.go:233] disabling docker service ...
	I0729 17:51:53.720187   48198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 17:51:53.736266   48198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 17:51:53.750400   48198 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 17:51:53.903881   48198 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 17:51:54.058999   48198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 17:51:54.073866   48198 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 17:51:54.092538   48198 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0729 17:51:54.092742   48198 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 17:51:54.092819   48198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:54.103816   48198 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 17:51:54.103886   48198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:54.115311   48198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:54.126778   48198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:54.137679   48198 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 17:51:54.148631   48198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:54.159319   48198 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:54.169940   48198 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:54.181067   48198 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 17:51:54.190813   48198 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0729 17:51:54.190883   48198 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 17:51:54.200584   48198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:51:54.349253   48198 ssh_runner.go:195] Run: sudo systemctl restart crio
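Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings before crio is restarted (a sketch that assumes the drop-in otherwise carried the stock minikube defaults):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]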
	I0729 17:51:54.591887   48198 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 17:51:54.591975   48198 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 17:51:54.596664   48198 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0729 17:51:54.596691   48198 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0729 17:51:54.596701   48198 command_runner.go:130] > Device: 0,22	Inode: 1328        Links: 1
	I0729 17:51:54.596712   48198 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 17:51:54.596720   48198 command_runner.go:130] > Access: 2024-07-29 17:51:54.450743080 +0000
	I0729 17:51:54.596725   48198 command_runner.go:130] > Modify: 2024-07-29 17:51:54.450743080 +0000
	I0729 17:51:54.596731   48198 command_runner.go:130] > Change: 2024-07-29 17:51:54.450743080 +0000
	I0729 17:51:54.596737   48198 command_runner.go:130] >  Birth: -
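The "Will wait 60s for socket path" step above amounts to polling the CRI-O socket until it exists. A minimal Go sketch of that idea (illustrative only, not minikube's actual start.go implementation):

	package main
	
	import (
		"fmt"
		"os"
		"time"
	)
	
	// waitForSocket polls path until it shows up as a unix socket or the timeout expires.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil // socket is present
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
	
	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("crio socket is ready")
	}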
	I0729 17:51:54.596806   48198 start.go:563] Will wait 60s for crictl version
	I0729 17:51:54.596862   48198 ssh_runner.go:195] Run: which crictl
	I0729 17:51:54.600640   48198 command_runner.go:130] > /usr/bin/crictl
	I0729 17:51:54.600699   48198 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 17:51:54.641561   48198 command_runner.go:130] > Version:  0.1.0
	I0729 17:51:54.641586   48198 command_runner.go:130] > RuntimeName:  cri-o
	I0729 17:51:54.641591   48198 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0729 17:51:54.641596   48198 command_runner.go:130] > RuntimeApiVersion:  v1
	I0729 17:51:54.641613   48198 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 17:51:54.641665   48198 ssh_runner.go:195] Run: crio --version
	I0729 17:51:54.669212   48198 command_runner.go:130] > crio version 1.29.1
	I0729 17:51:54.669232   48198 command_runner.go:130] > Version:        1.29.1
	I0729 17:51:54.669237   48198 command_runner.go:130] > GitCommit:      unknown
	I0729 17:51:54.669242   48198 command_runner.go:130] > GitCommitDate:  unknown
	I0729 17:51:54.669264   48198 command_runner.go:130] > GitTreeState:   clean
	I0729 17:51:54.669269   48198 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 17:51:54.669274   48198 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 17:51:54.669277   48198 command_runner.go:130] > Compiler:       gc
	I0729 17:51:54.669281   48198 command_runner.go:130] > Platform:       linux/amd64
	I0729 17:51:54.669285   48198 command_runner.go:130] > Linkmode:       dynamic
	I0729 17:51:54.669289   48198 command_runner.go:130] > BuildTags:      
	I0729 17:51:54.669303   48198 command_runner.go:130] >   containers_image_ostree_stub
	I0729 17:51:54.669309   48198 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 17:51:54.669312   48198 command_runner.go:130] >   btrfs_noversion
	I0729 17:51:54.669316   48198 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 17:51:54.669321   48198 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 17:51:54.669324   48198 command_runner.go:130] >   seccomp
	I0729 17:51:54.669329   48198 command_runner.go:130] > LDFlags:          unknown
	I0729 17:51:54.669336   48198 command_runner.go:130] > SeccompEnabled:   true
	I0729 17:51:54.669339   48198 command_runner.go:130] > AppArmorEnabled:  false
	I0729 17:51:54.670610   48198 ssh_runner.go:195] Run: crio --version
	I0729 17:51:54.699198   48198 command_runner.go:130] > crio version 1.29.1
	I0729 17:51:54.699217   48198 command_runner.go:130] > Version:        1.29.1
	I0729 17:51:54.699223   48198 command_runner.go:130] > GitCommit:      unknown
	I0729 17:51:54.699228   48198 command_runner.go:130] > GitCommitDate:  unknown
	I0729 17:51:54.699231   48198 command_runner.go:130] > GitTreeState:   clean
	I0729 17:51:54.699236   48198 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 17:51:54.699240   48198 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 17:51:54.699250   48198 command_runner.go:130] > Compiler:       gc
	I0729 17:51:54.699255   48198 command_runner.go:130] > Platform:       linux/amd64
	I0729 17:51:54.699259   48198 command_runner.go:130] > Linkmode:       dynamic
	I0729 17:51:54.699265   48198 command_runner.go:130] > BuildTags:      
	I0729 17:51:54.699270   48198 command_runner.go:130] >   containers_image_ostree_stub
	I0729 17:51:54.699274   48198 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 17:51:54.699278   48198 command_runner.go:130] >   btrfs_noversion
	I0729 17:51:54.699283   48198 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 17:51:54.699290   48198 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 17:51:54.699293   48198 command_runner.go:130] >   seccomp
	I0729 17:51:54.699298   48198 command_runner.go:130] > LDFlags:          unknown
	I0729 17:51:54.699302   48198 command_runner.go:130] > SeccompEnabled:   true
	I0729 17:51:54.699306   48198 command_runner.go:130] > AppArmorEnabled:  false
	I0729 17:51:54.701243   48198 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 17:51:54.702677   48198 main.go:141] libmachine: (multinode-602258) Calling .GetIP
	I0729 17:51:54.705111   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:51:54.705497   48198 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:51:54.705524   48198 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:51:54.705722   48198 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 17:51:54.710647   48198 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0729 17:51:54.710810   48198 kubeadm.go:883] updating cluster {Name:multinode-602258 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-602258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.218 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.107 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 17:51:54.710970   48198 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:51:54.711028   48198 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 17:51:54.755611   48198 command_runner.go:130] > {
	I0729 17:51:54.755634   48198 command_runner.go:130] >   "images": [
	I0729 17:51:54.755638   48198 command_runner.go:130] >     {
	I0729 17:51:54.755646   48198 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 17:51:54.755650   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.755656   48198 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 17:51:54.755659   48198 command_runner.go:130] >       ],
	I0729 17:51:54.755663   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.755673   48198 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 17:51:54.755684   48198 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 17:51:54.755691   48198 command_runner.go:130] >       ],
	I0729 17:51:54.755698   48198 command_runner.go:130] >       "size": "87165492",
	I0729 17:51:54.755704   48198 command_runner.go:130] >       "uid": null,
	I0729 17:51:54.755712   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.755724   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.755734   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.755738   48198 command_runner.go:130] >     },
	I0729 17:51:54.755741   48198 command_runner.go:130] >     {
	I0729 17:51:54.755747   48198 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 17:51:54.755754   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.755760   48198 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 17:51:54.755769   48198 command_runner.go:130] >       ],
	I0729 17:51:54.755777   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.755798   48198 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 17:51:54.755812   48198 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 17:51:54.755818   48198 command_runner.go:130] >       ],
	I0729 17:51:54.755827   48198 command_runner.go:130] >       "size": "87174707",
	I0729 17:51:54.755835   48198 command_runner.go:130] >       "uid": null,
	I0729 17:51:54.755852   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.755862   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.755869   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.755877   48198 command_runner.go:130] >     },
	I0729 17:51:54.755885   48198 command_runner.go:130] >     {
	I0729 17:51:54.755898   48198 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 17:51:54.755908   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.755918   48198 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 17:51:54.755926   48198 command_runner.go:130] >       ],
	I0729 17:51:54.755932   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.755942   48198 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 17:51:54.755957   48198 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 17:51:54.755966   48198 command_runner.go:130] >       ],
	I0729 17:51:54.755976   48198 command_runner.go:130] >       "size": "1363676",
	I0729 17:51:54.755984   48198 command_runner.go:130] >       "uid": null,
	I0729 17:51:54.755994   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.756003   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.756011   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.756017   48198 command_runner.go:130] >     },
	I0729 17:51:54.756021   48198 command_runner.go:130] >     {
	I0729 17:51:54.756034   48198 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 17:51:54.756044   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.756055   48198 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 17:51:54.756063   48198 command_runner.go:130] >       ],
	I0729 17:51:54.756073   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.756093   48198 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 17:51:54.756112   48198 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 17:51:54.756120   48198 command_runner.go:130] >       ],
	I0729 17:51:54.756127   48198 command_runner.go:130] >       "size": "31470524",
	I0729 17:51:54.756136   48198 command_runner.go:130] >       "uid": null,
	I0729 17:51:54.756146   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.756164   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.756173   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.756181   48198 command_runner.go:130] >     },
	I0729 17:51:54.756186   48198 command_runner.go:130] >     {
	I0729 17:51:54.756193   48198 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 17:51:54.756201   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.756213   48198 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 17:51:54.756222   48198 command_runner.go:130] >       ],
	I0729 17:51:54.756231   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.756246   48198 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 17:51:54.756261   48198 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 17:51:54.756269   48198 command_runner.go:130] >       ],
	I0729 17:51:54.756274   48198 command_runner.go:130] >       "size": "61245718",
	I0729 17:51:54.756278   48198 command_runner.go:130] >       "uid": null,
	I0729 17:51:54.756285   48198 command_runner.go:130] >       "username": "nonroot",
	I0729 17:51:54.756295   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.756303   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.756309   48198 command_runner.go:130] >     },
	I0729 17:51:54.756317   48198 command_runner.go:130] >     {
	I0729 17:51:54.756327   48198 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 17:51:54.756336   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.756346   48198 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 17:51:54.756353   48198 command_runner.go:130] >       ],
	I0729 17:51:54.756358   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.756370   48198 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 17:51:54.756383   48198 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 17:51:54.756392   48198 command_runner.go:130] >       ],
	I0729 17:51:54.756402   48198 command_runner.go:130] >       "size": "150779692",
	I0729 17:51:54.756409   48198 command_runner.go:130] >       "uid": {
	I0729 17:51:54.756416   48198 command_runner.go:130] >         "value": "0"
	I0729 17:51:54.756425   48198 command_runner.go:130] >       },
	I0729 17:51:54.756435   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.756443   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.756449   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.756452   48198 command_runner.go:130] >     },
	I0729 17:51:54.756460   48198 command_runner.go:130] >     {
	I0729 17:51:54.756479   48198 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 17:51:54.756489   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.756500   48198 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 17:51:54.756508   48198 command_runner.go:130] >       ],
	I0729 17:51:54.756517   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.756531   48198 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 17:51:54.756545   48198 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 17:51:54.756555   48198 command_runner.go:130] >       ],
	I0729 17:51:54.756560   48198 command_runner.go:130] >       "size": "117609954",
	I0729 17:51:54.756565   48198 command_runner.go:130] >       "uid": {
	I0729 17:51:54.756570   48198 command_runner.go:130] >         "value": "0"
	I0729 17:51:54.756575   48198 command_runner.go:130] >       },
	I0729 17:51:54.756581   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.756587   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.756594   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.756599   48198 command_runner.go:130] >     },
	I0729 17:51:54.756604   48198 command_runner.go:130] >     {
	I0729 17:51:54.756615   48198 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 17:51:54.756621   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.756630   48198 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 17:51:54.756635   48198 command_runner.go:130] >       ],
	I0729 17:51:54.756646   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.756676   48198 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 17:51:54.756693   48198 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 17:51:54.756706   48198 command_runner.go:130] >       ],
	I0729 17:51:54.756713   48198 command_runner.go:130] >       "size": "112198984",
	I0729 17:51:54.756719   48198 command_runner.go:130] >       "uid": {
	I0729 17:51:54.756727   48198 command_runner.go:130] >         "value": "0"
	I0729 17:51:54.756733   48198 command_runner.go:130] >       },
	I0729 17:51:54.756820   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.756859   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.756869   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.756874   48198 command_runner.go:130] >     },
	I0729 17:51:54.756883   48198 command_runner.go:130] >     {
	I0729 17:51:54.756892   48198 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 17:51:54.756901   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.756926   48198 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 17:51:54.756936   48198 command_runner.go:130] >       ],
	I0729 17:51:54.756943   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.756958   48198 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 17:51:54.756972   48198 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 17:51:54.756979   48198 command_runner.go:130] >       ],
	I0729 17:51:54.756985   48198 command_runner.go:130] >       "size": "85953945",
	I0729 17:51:54.756992   48198 command_runner.go:130] >       "uid": null,
	I0729 17:51:54.756997   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.757001   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.757006   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.757015   48198 command_runner.go:130] >     },
	I0729 17:51:54.757018   48198 command_runner.go:130] >     {
	I0729 17:51:54.757025   48198 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 17:51:54.757031   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.757035   48198 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 17:51:54.757039   48198 command_runner.go:130] >       ],
	I0729 17:51:54.757043   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.757053   48198 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 17:51:54.757062   48198 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 17:51:54.757065   48198 command_runner.go:130] >       ],
	I0729 17:51:54.757069   48198 command_runner.go:130] >       "size": "63051080",
	I0729 17:51:54.757075   48198 command_runner.go:130] >       "uid": {
	I0729 17:51:54.757079   48198 command_runner.go:130] >         "value": "0"
	I0729 17:51:54.757083   48198 command_runner.go:130] >       },
	I0729 17:51:54.757094   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.757101   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.757105   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.757111   48198 command_runner.go:130] >     },
	I0729 17:51:54.757114   48198 command_runner.go:130] >     {
	I0729 17:51:54.757120   48198 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 17:51:54.757126   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.757130   48198 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 17:51:54.757136   48198 command_runner.go:130] >       ],
	I0729 17:51:54.757140   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.757146   48198 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 17:51:54.757161   48198 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 17:51:54.757166   48198 command_runner.go:130] >       ],
	I0729 17:51:54.757170   48198 command_runner.go:130] >       "size": "750414",
	I0729 17:51:54.757174   48198 command_runner.go:130] >       "uid": {
	I0729 17:51:54.757178   48198 command_runner.go:130] >         "value": "65535"
	I0729 17:51:54.757182   48198 command_runner.go:130] >       },
	I0729 17:51:54.757187   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.757191   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.757197   48198 command_runner.go:130] >       "pinned": true
	I0729 17:51:54.757200   48198 command_runner.go:130] >     }
	I0729 17:51:54.757203   48198 command_runner.go:130] >   ]
	I0729 17:51:54.757208   48198 command_runner.go:130] > }
	I0729 17:51:54.757400   48198 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 17:51:54.757411   48198 crio.go:433] Images already preloaded, skipping extraction
	I0729 17:51:54.757464   48198 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 17:51:54.794756   48198 command_runner.go:130] > {
	I0729 17:51:54.794777   48198 command_runner.go:130] >   "images": [
	I0729 17:51:54.794780   48198 command_runner.go:130] >     {
	I0729 17:51:54.794788   48198 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 17:51:54.794793   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.794798   48198 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 17:51:54.794801   48198 command_runner.go:130] >       ],
	I0729 17:51:54.794808   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.794816   48198 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 17:51:54.794823   48198 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 17:51:54.794826   48198 command_runner.go:130] >       ],
	I0729 17:51:54.794830   48198 command_runner.go:130] >       "size": "87165492",
	I0729 17:51:54.794834   48198 command_runner.go:130] >       "uid": null,
	I0729 17:51:54.794837   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.794851   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.794858   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.794863   48198 command_runner.go:130] >     },
	I0729 17:51:54.794866   48198 command_runner.go:130] >     {
	I0729 17:51:54.794871   48198 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 17:51:54.794876   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.794881   48198 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 17:51:54.794884   48198 command_runner.go:130] >       ],
	I0729 17:51:54.794888   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.794895   48198 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 17:51:54.794903   48198 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 17:51:54.794906   48198 command_runner.go:130] >       ],
	I0729 17:51:54.794912   48198 command_runner.go:130] >       "size": "87174707",
	I0729 17:51:54.794916   48198 command_runner.go:130] >       "uid": null,
	I0729 17:51:54.794923   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.794930   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.794933   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.794943   48198 command_runner.go:130] >     },
	I0729 17:51:54.794950   48198 command_runner.go:130] >     {
	I0729 17:51:54.794956   48198 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 17:51:54.794960   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.794965   48198 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 17:51:54.794968   48198 command_runner.go:130] >       ],
	I0729 17:51:54.794972   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.794980   48198 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 17:51:54.794987   48198 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 17:51:54.794991   48198 command_runner.go:130] >       ],
	I0729 17:51:54.794995   48198 command_runner.go:130] >       "size": "1363676",
	I0729 17:51:54.794999   48198 command_runner.go:130] >       "uid": null,
	I0729 17:51:54.795003   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.795007   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.795011   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.795015   48198 command_runner.go:130] >     },
	I0729 17:51:54.795018   48198 command_runner.go:130] >     {
	I0729 17:51:54.795024   48198 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 17:51:54.795029   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.795034   48198 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 17:51:54.795037   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795041   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.795051   48198 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 17:51:54.795070   48198 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 17:51:54.795076   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795079   48198 command_runner.go:130] >       "size": "31470524",
	I0729 17:51:54.795083   48198 command_runner.go:130] >       "uid": null,
	I0729 17:51:54.795087   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.795092   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.795097   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.795102   48198 command_runner.go:130] >     },
	I0729 17:51:54.795105   48198 command_runner.go:130] >     {
	I0729 17:51:54.795111   48198 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 17:51:54.795118   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.795122   48198 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 17:51:54.795126   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795135   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.795145   48198 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 17:51:54.795152   48198 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 17:51:54.795156   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795161   48198 command_runner.go:130] >       "size": "61245718",
	I0729 17:51:54.795166   48198 command_runner.go:130] >       "uid": null,
	I0729 17:51:54.795173   48198 command_runner.go:130] >       "username": "nonroot",
	I0729 17:51:54.795176   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.795181   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.795186   48198 command_runner.go:130] >     },
	I0729 17:51:54.795190   48198 command_runner.go:130] >     {
	I0729 17:51:54.795195   48198 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 17:51:54.795202   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.795206   48198 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 17:51:54.795210   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795214   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.795223   48198 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 17:51:54.795233   48198 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 17:51:54.795237   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795241   48198 command_runner.go:130] >       "size": "150779692",
	I0729 17:51:54.795245   48198 command_runner.go:130] >       "uid": {
	I0729 17:51:54.795250   48198 command_runner.go:130] >         "value": "0"
	I0729 17:51:54.795255   48198 command_runner.go:130] >       },
	I0729 17:51:54.795259   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.795263   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.795267   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.795273   48198 command_runner.go:130] >     },
	I0729 17:51:54.795276   48198 command_runner.go:130] >     {
	I0729 17:51:54.795282   48198 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 17:51:54.795288   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.795293   48198 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 17:51:54.795299   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795302   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.795312   48198 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 17:51:54.795321   48198 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 17:51:54.795324   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795332   48198 command_runner.go:130] >       "size": "117609954",
	I0729 17:51:54.795338   48198 command_runner.go:130] >       "uid": {
	I0729 17:51:54.795342   48198 command_runner.go:130] >         "value": "0"
	I0729 17:51:54.795345   48198 command_runner.go:130] >       },
	I0729 17:51:54.795351   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.795355   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.795361   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.795365   48198 command_runner.go:130] >     },
	I0729 17:51:54.795372   48198 command_runner.go:130] >     {
	I0729 17:51:54.795380   48198 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 17:51:54.795386   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.795392   48198 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 17:51:54.795398   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795402   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.795424   48198 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 17:51:54.795433   48198 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 17:51:54.795439   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795443   48198 command_runner.go:130] >       "size": "112198984",
	I0729 17:51:54.795447   48198 command_runner.go:130] >       "uid": {
	I0729 17:51:54.795453   48198 command_runner.go:130] >         "value": "0"
	I0729 17:51:54.795457   48198 command_runner.go:130] >       },
	I0729 17:51:54.795463   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.795467   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.795473   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.795476   48198 command_runner.go:130] >     },
	I0729 17:51:54.795482   48198 command_runner.go:130] >     {
	I0729 17:51:54.795488   48198 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 17:51:54.795494   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.795499   48198 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 17:51:54.795504   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795508   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.795517   48198 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 17:51:54.795526   48198 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 17:51:54.795531   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795535   48198 command_runner.go:130] >       "size": "85953945",
	I0729 17:51:54.795539   48198 command_runner.go:130] >       "uid": null,
	I0729 17:51:54.795549   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.795556   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.795560   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.795565   48198 command_runner.go:130] >     },
	I0729 17:51:54.795568   48198 command_runner.go:130] >     {
	I0729 17:51:54.795576   48198 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 17:51:54.795580   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.795587   48198 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 17:51:54.795601   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795607   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.795614   48198 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 17:51:54.795624   48198 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 17:51:54.795630   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795634   48198 command_runner.go:130] >       "size": "63051080",
	I0729 17:51:54.795638   48198 command_runner.go:130] >       "uid": {
	I0729 17:51:54.795643   48198 command_runner.go:130] >         "value": "0"
	I0729 17:51:54.795647   48198 command_runner.go:130] >       },
	I0729 17:51:54.795653   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.795656   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.795660   48198 command_runner.go:130] >       "pinned": false
	I0729 17:51:54.795663   48198 command_runner.go:130] >     },
	I0729 17:51:54.795667   48198 command_runner.go:130] >     {
	I0729 17:51:54.795673   48198 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 17:51:54.795679   48198 command_runner.go:130] >       "repoTags": [
	I0729 17:51:54.795684   48198 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 17:51:54.795689   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795693   48198 command_runner.go:130] >       "repoDigests": [
	I0729 17:51:54.795700   48198 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 17:51:54.795708   48198 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 17:51:54.795712   48198 command_runner.go:130] >       ],
	I0729 17:51:54.795718   48198 command_runner.go:130] >       "size": "750414",
	I0729 17:51:54.795722   48198 command_runner.go:130] >       "uid": {
	I0729 17:51:54.795726   48198 command_runner.go:130] >         "value": "65535"
	I0729 17:51:54.795729   48198 command_runner.go:130] >       },
	I0729 17:51:54.795733   48198 command_runner.go:130] >       "username": "",
	I0729 17:51:54.795737   48198 command_runner.go:130] >       "spec": null,
	I0729 17:51:54.795746   48198 command_runner.go:130] >       "pinned": true
	I0729 17:51:54.795752   48198 command_runner.go:130] >     }
	I0729 17:51:54.795755   48198 command_runner.go:130] >   ]
	I0729 17:51:54.795760   48198 command_runner.go:130] > }
	I0729 17:51:54.796204   48198 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 17:51:54.796224   48198 cache_images.go:84] Images are preloaded, skipping loading
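The preload check above (crio.go:514, cache_images.go:84) is driven by the JSON returned from "sudo crictl images --output json". A minimal Go sketch that decodes the layout shown above and looks for one expected tag (illustrative only, not the actual minikube helper):

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// imageList mirrors the fields visible in the crictl output above.
	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}
	
	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		want := "registry.k8s.io/kube-apiserver:v1.30.3"
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if tag == want {
					fmt.Println("found preloaded image:", img.ID)
					return
				}
			}
		}
		fmt.Println("image not preloaded:", want)
	}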
	I0729 17:51:54.796237   48198 kubeadm.go:934] updating node { 192.168.39.218 8443 v1.30.3 crio true true} ...
	I0729 17:51:54.796333   48198 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-602258 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-602258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
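The kubelet drop-in printed above (kubeadm.go:946) is a systemd unit rendered from the node's Kubernetes version, hostname and IP. A rough Go text/template sketch of how such a drop-in could be generated (field names here are hypothetical, not minikube's actual template):

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// dropIn reproduces the flags shown in the log above.
	const dropIn = `[Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
	
	[Install]
	`
	
	func main() {
		t := template.Must(template.New("kubelet").Parse(dropIn))
		_ = t.Execute(os.Stdout, map[string]string{
			"KubernetesVersion": "v1.30.3",
			"NodeName":          "multinode-602258",
			"NodeIP":            "192.168.39.218",
		})
	}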
	I0729 17:51:54.796390   48198 ssh_runner.go:195] Run: crio config
	I0729 17:51:54.834317   48198 command_runner.go:130] ! time="2024-07-29 17:51:54.801427832Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0729 17:51:54.840032   48198 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0729 17:51:54.852565   48198 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0729 17:51:54.852593   48198 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0729 17:51:54.852608   48198 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0729 17:51:54.852613   48198 command_runner.go:130] > #
	I0729 17:51:54.852624   48198 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0729 17:51:54.852633   48198 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0729 17:51:54.852643   48198 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0729 17:51:54.852659   48198 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0729 17:51:54.852668   48198 command_runner.go:130] > # reload'.
	I0729 17:51:54.852681   48198 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0729 17:51:54.852692   48198 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0729 17:51:54.852701   48198 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0729 17:51:54.852709   48198 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0729 17:51:54.852715   48198 command_runner.go:130] > [crio]
	I0729 17:51:54.852721   48198 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0729 17:51:54.852728   48198 command_runner.go:130] > # containers images, in this directory.
	I0729 17:51:54.852732   48198 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0729 17:51:54.852744   48198 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0729 17:51:54.852756   48198 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0729 17:51:54.852767   48198 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0729 17:51:54.852776   48198 command_runner.go:130] > # imagestore = ""
	I0729 17:51:54.852788   48198 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0729 17:51:54.852800   48198 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0729 17:51:54.852809   48198 command_runner.go:130] > storage_driver = "overlay"
	I0729 17:51:54.852820   48198 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0729 17:51:54.852832   48198 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0729 17:51:54.852841   48198 command_runner.go:130] > storage_option = [
	I0729 17:51:54.852849   48198 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0729 17:51:54.852857   48198 command_runner.go:130] > ]
	I0729 17:51:54.852867   48198 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0729 17:51:54.852880   48198 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0729 17:51:54.852890   48198 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0729 17:51:54.852902   48198 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0729 17:51:54.852913   48198 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0729 17:51:54.852920   48198 command_runner.go:130] > # always happen on a node reboot
	I0729 17:51:54.852925   48198 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0729 17:51:54.852939   48198 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0729 17:51:54.852946   48198 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0729 17:51:54.852952   48198 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0729 17:51:54.852958   48198 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0729 17:51:54.852966   48198 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0729 17:51:54.852977   48198 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0729 17:51:54.852983   48198 command_runner.go:130] > # internal_wipe = true
	I0729 17:51:54.852991   48198 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0729 17:51:54.852998   48198 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0729 17:51:54.853002   48198 command_runner.go:130] > # internal_repair = false
	I0729 17:51:54.853008   48198 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0729 17:51:54.853016   48198 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0729 17:51:54.853022   48198 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0729 17:51:54.853030   48198 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0729 17:51:54.853036   48198 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0729 17:51:54.853041   48198 command_runner.go:130] > [crio.api]
	I0729 17:51:54.853046   48198 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0729 17:51:54.853052   48198 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0729 17:51:54.853062   48198 command_runner.go:130] > # IP address on which the stream server will listen.
	I0729 17:51:54.853069   48198 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0729 17:51:54.853075   48198 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0729 17:51:54.853082   48198 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0729 17:51:54.853086   48198 command_runner.go:130] > # stream_port = "0"
	I0729 17:51:54.853093   48198 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0729 17:51:54.853101   48198 command_runner.go:130] > # stream_enable_tls = false
	I0729 17:51:54.853109   48198 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0729 17:51:54.853113   48198 command_runner.go:130] > # stream_idle_timeout = ""
	I0729 17:51:54.853119   48198 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0729 17:51:54.853127   48198 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0729 17:51:54.853131   48198 command_runner.go:130] > # minutes.
	I0729 17:51:54.853135   48198 command_runner.go:130] > # stream_tls_cert = ""
	I0729 17:51:54.853143   48198 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0729 17:51:54.853151   48198 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0729 17:51:54.853155   48198 command_runner.go:130] > # stream_tls_key = ""
	I0729 17:51:54.853163   48198 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0729 17:51:54.853171   48198 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0729 17:51:54.853194   48198 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0729 17:51:54.853201   48198 command_runner.go:130] > # stream_tls_ca = ""
	I0729 17:51:54.853208   48198 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 17:51:54.853215   48198 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0729 17:51:54.853222   48198 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 17:51:54.853228   48198 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0729 17:51:54.853234   48198 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0729 17:51:54.853241   48198 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0729 17:51:54.853245   48198 command_runner.go:130] > [crio.runtime]
	I0729 17:51:54.853252   48198 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0729 17:51:54.853259   48198 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0729 17:51:54.853263   48198 command_runner.go:130] > # "nofile=1024:2048"
	I0729 17:51:54.853271   48198 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0729 17:51:54.853276   48198 command_runner.go:130] > # default_ulimits = [
	I0729 17:51:54.853281   48198 command_runner.go:130] > # ]
	I0729 17:51:54.853287   48198 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0729 17:51:54.853293   48198 command_runner.go:130] > # no_pivot = false
	I0729 17:51:54.853299   48198 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0729 17:51:54.853310   48198 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0729 17:51:54.853317   48198 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0729 17:51:54.853323   48198 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0729 17:51:54.853329   48198 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0729 17:51:54.853336   48198 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 17:51:54.853342   48198 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0729 17:51:54.853346   48198 command_runner.go:130] > # Cgroup setting for conmon
	I0729 17:51:54.853354   48198 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0729 17:51:54.853359   48198 command_runner.go:130] > conmon_cgroup = "pod"
	I0729 17:51:54.853365   48198 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0729 17:51:54.853371   48198 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0729 17:51:54.853378   48198 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 17:51:54.853384   48198 command_runner.go:130] > conmon_env = [
	I0729 17:51:54.853390   48198 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 17:51:54.853395   48198 command_runner.go:130] > ]
	I0729 17:51:54.853401   48198 command_runner.go:130] > # Additional environment variables to set for all the
	I0729 17:51:54.853407   48198 command_runner.go:130] > # containers. These are overridden if set in the
	I0729 17:51:54.853412   48198 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0729 17:51:54.853418   48198 command_runner.go:130] > # default_env = [
	I0729 17:51:54.853421   48198 command_runner.go:130] > # ]
	I0729 17:51:54.853430   48198 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0729 17:51:54.853436   48198 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0729 17:51:54.853442   48198 command_runner.go:130] > # selinux = false
	I0729 17:51:54.853448   48198 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0729 17:51:54.853456   48198 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0729 17:51:54.853465   48198 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0729 17:51:54.853471   48198 command_runner.go:130] > # seccomp_profile = ""
	I0729 17:51:54.853477   48198 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0729 17:51:54.853484   48198 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0729 17:51:54.853492   48198 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0729 17:51:54.853496   48198 command_runner.go:130] > # which might increase security.
	I0729 17:51:54.853503   48198 command_runner.go:130] > # This option is currently deprecated,
	I0729 17:51:54.853508   48198 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0729 17:51:54.853515   48198 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0729 17:51:54.853520   48198 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0729 17:51:54.853528   48198 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0729 17:51:54.853538   48198 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0729 17:51:54.853546   48198 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0729 17:51:54.853554   48198 command_runner.go:130] > # This option supports live configuration reload.
	I0729 17:51:54.853560   48198 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0729 17:51:54.853566   48198 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0729 17:51:54.853573   48198 command_runner.go:130] > # the cgroup blockio controller.
	I0729 17:51:54.853577   48198 command_runner.go:130] > # blockio_config_file = ""
	I0729 17:51:54.853586   48198 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0729 17:51:54.853592   48198 command_runner.go:130] > # blockio parameters.
	I0729 17:51:54.853596   48198 command_runner.go:130] > # blockio_reload = false
	I0729 17:51:54.853604   48198 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0729 17:51:54.853613   48198 command_runner.go:130] > # irqbalance daemon.
	I0729 17:51:54.853620   48198 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0729 17:51:54.853625   48198 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0729 17:51:54.853636   48198 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0729 17:51:54.853645   48198 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0729 17:51:54.853652   48198 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0729 17:51:54.853660   48198 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0729 17:51:54.853666   48198 command_runner.go:130] > # This option supports live configuration reload.
	I0729 17:51:54.853672   48198 command_runner.go:130] > # rdt_config_file = ""
	I0729 17:51:54.853677   48198 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0729 17:51:54.853683   48198 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0729 17:51:54.853711   48198 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0729 17:51:54.853719   48198 command_runner.go:130] > # separate_pull_cgroup = ""
	I0729 17:51:54.853724   48198 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0729 17:51:54.853731   48198 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0729 17:51:54.853736   48198 command_runner.go:130] > # will be added.
	I0729 17:51:54.853741   48198 command_runner.go:130] > # default_capabilities = [
	I0729 17:51:54.853746   48198 command_runner.go:130] > # 	"CHOWN",
	I0729 17:51:54.853750   48198 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0729 17:51:54.853756   48198 command_runner.go:130] > # 	"FSETID",
	I0729 17:51:54.853759   48198 command_runner.go:130] > # 	"FOWNER",
	I0729 17:51:54.853765   48198 command_runner.go:130] > # 	"SETGID",
	I0729 17:51:54.853776   48198 command_runner.go:130] > # 	"SETUID",
	I0729 17:51:54.853785   48198 command_runner.go:130] > # 	"SETPCAP",
	I0729 17:51:54.853794   48198 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0729 17:51:54.853808   48198 command_runner.go:130] > # 	"KILL",
	I0729 17:51:54.853816   48198 command_runner.go:130] > # ]
	I0729 17:51:54.853829   48198 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0729 17:51:54.853843   48198 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0729 17:51:54.853853   48198 command_runner.go:130] > # add_inheritable_capabilities = false
	I0729 17:51:54.853865   48198 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0729 17:51:54.853875   48198 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 17:51:54.853882   48198 command_runner.go:130] > default_sysctls = [
	I0729 17:51:54.853886   48198 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0729 17:51:54.853892   48198 command_runner.go:130] > ]
	I0729 17:51:54.853896   48198 command_runner.go:130] > # List of devices on the host that a
	I0729 17:51:54.853904   48198 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0729 17:51:54.853910   48198 command_runner.go:130] > # allowed_devices = [
	I0729 17:51:54.853914   48198 command_runner.go:130] > # 	"/dev/fuse",
	I0729 17:51:54.853919   48198 command_runner.go:130] > # ]
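The commented allowed_devices block above can be enabled by uncommenting it; a minimal sketch using only the device already listed in the comment:

	allowed_devices = [
		"/dev/fuse",
	]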
	I0729 17:51:54.853923   48198 command_runner.go:130] > # List of additional devices, specified as
	I0729 17:51:54.853932   48198 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0729 17:51:54.853940   48198 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0729 17:51:54.853945   48198 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 17:51:54.853951   48198 command_runner.go:130] > # additional_devices = [
	I0729 17:51:54.853954   48198 command_runner.go:130] > # ]
	I0729 17:51:54.853959   48198 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0729 17:51:54.853965   48198 command_runner.go:130] > # cdi_spec_dirs = [
	I0729 17:51:54.853969   48198 command_runner.go:130] > # 	"/etc/cdi",
	I0729 17:51:54.853975   48198 command_runner.go:130] > # 	"/var/run/cdi",
	I0729 17:51:54.853979   48198 command_runner.go:130] > # ]
	I0729 17:51:54.853987   48198 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0729 17:51:54.853994   48198 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0729 17:51:54.853998   48198 command_runner.go:130] > # Defaults to false.
	I0729 17:51:54.854005   48198 command_runner.go:130] > # device_ownership_from_security_context = false
	I0729 17:51:54.854011   48198 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0729 17:51:54.854019   48198 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0729 17:51:54.854022   48198 command_runner.go:130] > # hooks_dir = [
	I0729 17:51:54.854027   48198 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0729 17:51:54.854031   48198 command_runner.go:130] > # ]
	I0729 17:51:54.854039   48198 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0729 17:51:54.854050   48198 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0729 17:51:54.854057   48198 command_runner.go:130] > # its default mounts from the following two files:
	I0729 17:51:54.854060   48198 command_runner.go:130] > #
	I0729 17:51:54.854066   48198 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0729 17:51:54.854075   48198 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0729 17:51:54.854082   48198 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0729 17:51:54.854085   48198 command_runner.go:130] > #
	I0729 17:51:54.854091   48198 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0729 17:51:54.854102   48198 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0729 17:51:54.854110   48198 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0729 17:51:54.854115   48198 command_runner.go:130] > #      only add mounts it finds in this file.
	I0729 17:51:54.854119   48198 command_runner.go:130] > #
	I0729 17:51:54.854123   48198 command_runner.go:130] > # default_mounts_file = ""
	I0729 17:51:54.854128   48198 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0729 17:51:54.854137   48198 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0729 17:51:54.854143   48198 command_runner.go:130] > pids_limit = 1024
	I0729 17:51:54.854148   48198 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0729 17:51:54.854156   48198 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0729 17:51:54.854163   48198 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0729 17:51:54.854173   48198 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0729 17:51:54.854178   48198 command_runner.go:130] > # log_size_max = -1
	I0729 17:51:54.854185   48198 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0729 17:51:54.854191   48198 command_runner.go:130] > # log_to_journald = false
	I0729 17:51:54.854197   48198 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0729 17:51:54.854204   48198 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0729 17:51:54.854209   48198 command_runner.go:130] > # Path to directory for container attach sockets.
	I0729 17:51:54.854216   48198 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0729 17:51:54.854221   48198 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0729 17:51:54.854227   48198 command_runner.go:130] > # bind_mount_prefix = ""
	I0729 17:51:54.854232   48198 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0729 17:51:54.854238   48198 command_runner.go:130] > # read_only = false
	I0729 17:51:54.854243   48198 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0729 17:51:54.854251   48198 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0729 17:51:54.854258   48198 command_runner.go:130] > # live configuration reload.
	I0729 17:51:54.854262   48198 command_runner.go:130] > # log_level = "info"
	I0729 17:51:54.854269   48198 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0729 17:51:54.854279   48198 command_runner.go:130] > # This option supports live configuration reload.
	I0729 17:51:54.854285   48198 command_runner.go:130] > # log_filter = ""
	I0729 17:51:54.854291   48198 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0729 17:51:54.854300   48198 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0729 17:51:54.854306   48198 command_runner.go:130] > # separated by comma.
	I0729 17:51:54.854329   48198 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 17:51:54.854341   48198 command_runner.go:130] > # uid_mappings = ""
	I0729 17:51:54.854349   48198 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0729 17:51:54.854357   48198 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0729 17:51:54.854382   48198 command_runner.go:130] > # separated by comma.
	I0729 17:51:54.854391   48198 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 17:51:54.854398   48198 command_runner.go:130] > # gid_mappings = ""
	I0729 17:51:54.854404   48198 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0729 17:51:54.854412   48198 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 17:51:54.854420   48198 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 17:51:54.854429   48198 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 17:51:54.854435   48198 command_runner.go:130] > # minimum_mappable_uid = -1
	I0729 17:51:54.854441   48198 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0729 17:51:54.854449   48198 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 17:51:54.854457   48198 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 17:51:54.854464   48198 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 17:51:54.854470   48198 command_runner.go:130] > # minimum_mappable_gid = -1
	I0729 17:51:54.854476   48198 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0729 17:51:54.854484   48198 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0729 17:51:54.854491   48198 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0729 17:51:54.854495   48198 command_runner.go:130] > # ctr_stop_timeout = 30
	I0729 17:51:54.854501   48198 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0729 17:51:54.854509   48198 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0729 17:51:54.854516   48198 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0729 17:51:54.854520   48198 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0729 17:51:54.854526   48198 command_runner.go:130] > drop_infra_ctr = false
	I0729 17:51:54.854532   48198 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0729 17:51:54.854540   48198 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0729 17:51:54.854547   48198 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0729 17:51:54.854553   48198 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0729 17:51:54.854559   48198 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0729 17:51:54.854575   48198 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0729 17:51:54.854582   48198 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0729 17:51:54.854589   48198 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0729 17:51:54.854593   48198 command_runner.go:130] > # shared_cpuset = ""
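Both infra_ctr_cpuset and shared_cpuset take the standard Linux CPU list notation mentioned above; the CPU numbers in this sketch are assumptions chosen purely for illustration:

	infra_ctr_cpuset = "0-1"
	shared_cpuset = "2,4-7"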
	I0729 17:51:54.854601   48198 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0729 17:51:54.854606   48198 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0729 17:51:54.854612   48198 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0729 17:51:54.854618   48198 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0729 17:51:54.854624   48198 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0729 17:51:54.854630   48198 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0729 17:51:54.854637   48198 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0729 17:51:54.854641   48198 command_runner.go:130] > # enable_criu_support = false
	I0729 17:51:54.854648   48198 command_runner.go:130] > # Enable/disable the generation of the container,
	I0729 17:51:54.854654   48198 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0729 17:51:54.854660   48198 command_runner.go:130] > # enable_pod_events = false
	I0729 17:51:54.854665   48198 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0729 17:51:54.854679   48198 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0729 17:51:54.854684   48198 command_runner.go:130] > # default_runtime = "runc"
	I0729 17:51:54.854689   48198 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0729 17:51:54.854698   48198 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0729 17:51:54.854709   48198 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0729 17:51:54.854716   48198 command_runner.go:130] > # creation as a file is not desired either.
	I0729 17:51:54.854724   48198 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0729 17:51:54.854730   48198 command_runner.go:130] > # the hostname is being managed dynamically.
	I0729 17:51:54.854735   48198 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0729 17:51:54.854740   48198 command_runner.go:130] > # ]
	I0729 17:51:54.854746   48198 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0729 17:51:54.854754   48198 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0729 17:51:54.854761   48198 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0729 17:51:54.854770   48198 command_runner.go:130] > # Each entry in the table should follow the format:
	I0729 17:51:54.854779   48198 command_runner.go:130] > #
	I0729 17:51:54.854788   48198 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0729 17:51:54.854799   48198 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0729 17:51:54.854860   48198 command_runner.go:130] > # runtime_type = "oci"
	I0729 17:51:54.854870   48198 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0729 17:51:54.854879   48198 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0729 17:51:54.854886   48198 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0729 17:51:54.854890   48198 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0729 17:51:54.854896   48198 command_runner.go:130] > # monitor_env = []
	I0729 17:51:54.854901   48198 command_runner.go:130] > # privileged_without_host_devices = false
	I0729 17:51:54.854905   48198 command_runner.go:130] > # allowed_annotations = []
	I0729 17:51:54.854912   48198 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0729 17:51:54.854915   48198 command_runner.go:130] > # Where:
	I0729 17:51:54.854921   48198 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0729 17:51:54.854929   48198 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0729 17:51:54.854935   48198 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0729 17:51:54.854943   48198 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0729 17:51:54.854947   48198 command_runner.go:130] > #   in $PATH.
	I0729 17:51:54.854954   48198 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0729 17:51:54.854960   48198 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0729 17:51:54.854966   48198 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0729 17:51:54.854971   48198 command_runner.go:130] > #   state.
	I0729 17:51:54.854978   48198 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0729 17:51:54.854985   48198 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0729 17:51:54.854994   48198 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0729 17:51:54.855002   48198 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0729 17:51:54.855008   48198 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0729 17:51:54.855016   48198 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0729 17:51:54.855023   48198 command_runner.go:130] > #   The currently recognized values are:
	I0729 17:51:54.855029   48198 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0729 17:51:54.855038   48198 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0729 17:51:54.855048   48198 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0729 17:51:54.855055   48198 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0729 17:51:54.855065   48198 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0729 17:51:54.855073   48198 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0729 17:51:54.855080   48198 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0729 17:51:54.855087   48198 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0729 17:51:54.855093   48198 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0729 17:51:54.855105   48198 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0729 17:51:54.855111   48198 command_runner.go:130] > #   deprecated option "conmon".
	I0729 17:51:54.855118   48198 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0729 17:51:54.855130   48198 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0729 17:51:54.855138   48198 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0729 17:51:54.855145   48198 command_runner.go:130] > #   should be moved to the container's cgroup
	I0729 17:51:54.855151   48198 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0729 17:51:54.855158   48198 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0729 17:51:54.855164   48198 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0729 17:51:54.855172   48198 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0729 17:51:54.855175   48198 command_runner.go:130] > #
	I0729 17:51:54.855180   48198 command_runner.go:130] > # Using the seccomp notifier feature:
	I0729 17:51:54.855185   48198 command_runner.go:130] > #
	I0729 17:51:54.855190   48198 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0729 17:51:54.855198   48198 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0729 17:51:54.855202   48198 command_runner.go:130] > #
	I0729 17:51:54.855208   48198 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0729 17:51:54.855216   48198 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0729 17:51:54.855221   48198 command_runner.go:130] > #
	I0729 17:51:54.855227   48198 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0729 17:51:54.855232   48198 command_runner.go:130] > # feature.
	I0729 17:51:54.855235   48198 command_runner.go:130] > #
	I0729 17:51:54.855240   48198 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0729 17:51:54.855248   48198 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0729 17:51:54.855254   48198 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0729 17:51:54.855261   48198 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0729 17:51:54.855267   48198 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0729 17:51:54.855272   48198 command_runner.go:130] > #
	I0729 17:51:54.855278   48198 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0729 17:51:54.855286   48198 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0729 17:51:54.855290   48198 command_runner.go:130] > #
	I0729 17:51:54.855296   48198 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0729 17:51:54.855303   48198 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0729 17:51:54.855307   48198 command_runner.go:130] > #
	I0729 17:51:54.855313   48198 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0729 17:51:54.855320   48198 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0729 17:51:54.855324   48198 command_runner.go:130] > # limitation.
	I0729 17:51:54.855332   48198 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0729 17:51:54.855338   48198 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0729 17:51:54.855346   48198 command_runner.go:130] > runtime_type = "oci"
	I0729 17:51:54.855353   48198 command_runner.go:130] > runtime_root = "/run/runc"
	I0729 17:51:54.855357   48198 command_runner.go:130] > runtime_config_path = ""
	I0729 17:51:54.855363   48198 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0729 17:51:54.855367   48198 command_runner.go:130] > monitor_cgroup = "pod"
	I0729 17:51:54.855374   48198 command_runner.go:130] > monitor_exec_cgroup = ""
	I0729 17:51:54.855378   48198 command_runner.go:130] > monitor_env = [
	I0729 17:51:54.855385   48198 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 17:51:54.855391   48198 command_runner.go:130] > ]
	I0729 17:51:54.855395   48198 command_runner.go:130] > privileged_without_host_devices = false
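Following the runtime-handler format documented above, an additional handler entry could be declared as in the sketch below; the crun name, paths, and annotation are assumptions for illustration and are not part of this machine's configuration:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	allowed_annotations = [
		"io.kubernetes.cri-o.Devices",
	]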
	I0729 17:51:54.855403   48198 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0729 17:51:54.855413   48198 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0729 17:51:54.855421   48198 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0729 17:51:54.855430   48198 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0729 17:51:54.855440   48198 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0729 17:51:54.855447   48198 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0729 17:51:54.855455   48198 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0729 17:51:54.855465   48198 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0729 17:51:54.855470   48198 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0729 17:51:54.855477   48198 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0729 17:51:54.855480   48198 command_runner.go:130] > # Example:
	I0729 17:51:54.855484   48198 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0729 17:51:54.855488   48198 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0729 17:51:54.855493   48198 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0729 17:51:54.855497   48198 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0729 17:51:54.855500   48198 command_runner.go:130] > # cpuset = 0
	I0729 17:51:54.855504   48198 command_runner.go:130] > # cpushares = "0-1"
	I0729 17:51:54.855507   48198 command_runner.go:130] > # Where:
	I0729 17:51:54.855511   48198 command_runner.go:130] > # The workload name is workload-type.
	I0729 17:51:54.855517   48198 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0729 17:51:54.855522   48198 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0729 17:51:54.855527   48198 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0729 17:51:54.855533   48198 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0729 17:51:54.855543   48198 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0729 17:51:54.855548   48198 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0729 17:51:54.855554   48198 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0729 17:51:54.855563   48198 command_runner.go:130] > # Default value is set to true
	I0729 17:51:54.855570   48198 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0729 17:51:54.855575   48198 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0729 17:51:54.855581   48198 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0729 17:51:54.855586   48198 command_runner.go:130] > # Default value is set to 'false'
	I0729 17:51:54.855591   48198 command_runner.go:130] > # disable_hostport_mapping = false
	I0729 17:51:54.855597   48198 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0729 17:51:54.855602   48198 command_runner.go:130] > #
	I0729 17:51:54.855607   48198 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0729 17:51:54.855614   48198 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0729 17:51:54.855620   48198 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0729 17:51:54.855628   48198 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0729 17:51:54.855635   48198 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0729 17:51:54.855639   48198 command_runner.go:130] > [crio.image]
	I0729 17:51:54.855644   48198 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0729 17:51:54.855651   48198 command_runner.go:130] > # default_transport = "docker://"
	I0729 17:51:54.855656   48198 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0729 17:51:54.855665   48198 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0729 17:51:54.855671   48198 command_runner.go:130] > # global_auth_file = ""
	I0729 17:51:54.855676   48198 command_runner.go:130] > # The image used to instantiate infra containers.
	I0729 17:51:54.855682   48198 command_runner.go:130] > # This option supports live configuration reload.
	I0729 17:51:54.855687   48198 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0729 17:51:54.855695   48198 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0729 17:51:54.855703   48198 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0729 17:51:54.855708   48198 command_runner.go:130] > # This option supports live configuration reload.
	I0729 17:51:54.855714   48198 command_runner.go:130] > # pause_image_auth_file = ""
	I0729 17:51:54.855719   48198 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0729 17:51:54.855726   48198 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0729 17:51:54.855734   48198 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0729 17:51:54.855741   48198 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0729 17:51:54.855745   48198 command_runner.go:130] > # pause_command = "/pause"
	I0729 17:51:54.855753   48198 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0729 17:51:54.855760   48198 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0729 17:51:54.855766   48198 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0729 17:51:54.855782   48198 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0729 17:51:54.855793   48198 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0729 17:51:54.855810   48198 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0729 17:51:54.855819   48198 command_runner.go:130] > # pinned_images = [
	I0729 17:51:54.855824   48198 command_runner.go:130] > # ]
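As a hedged example of the pinning syntax above, the pause image mentioned in the earlier pause_image comment could be kept out of kubelet garbage collection like this; the tag simply mirrors that commented default and is not taken from this run's config:

	pinned_images = [
		"registry.k8s.io/pause:3.9",
	]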
	I0729 17:51:54.855835   48198 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0729 17:51:54.855848   48198 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0729 17:51:54.855861   48198 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0729 17:51:54.855872   48198 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0729 17:51:54.855880   48198 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0729 17:51:54.855883   48198 command_runner.go:130] > # signature_policy = ""
	I0729 17:51:54.855891   48198 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0729 17:51:54.855897   48198 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0729 17:51:54.855905   48198 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0729 17:51:54.855913   48198 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0729 17:51:54.855919   48198 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0729 17:51:54.855926   48198 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0729 17:51:54.855931   48198 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0729 17:51:54.855942   48198 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0729 17:51:54.855948   48198 command_runner.go:130] > # changing them here.
	I0729 17:51:54.855952   48198 command_runner.go:130] > # insecure_registries = [
	I0729 17:51:54.855958   48198 command_runner.go:130] > # ]
	I0729 17:51:54.855963   48198 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0729 17:51:54.855970   48198 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0729 17:51:54.855974   48198 command_runner.go:130] > # image_volumes = "mkdir"
	I0729 17:51:54.855980   48198 command_runner.go:130] > # Temporary directory to use for storing big files
	I0729 17:51:54.855985   48198 command_runner.go:130] > # big_files_temporary_dir = ""
	I0729 17:51:54.855991   48198 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0729 17:51:54.855997   48198 command_runner.go:130] > # CNI plugins.
	I0729 17:51:54.856001   48198 command_runner.go:130] > [crio.network]
	I0729 17:51:54.856008   48198 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0729 17:51:54.856013   48198 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0729 17:51:54.856019   48198 command_runner.go:130] > # cni_default_network = ""
	I0729 17:51:54.856025   48198 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0729 17:51:54.856031   48198 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0729 17:51:54.856037   48198 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0729 17:51:54.856043   48198 command_runner.go:130] > # plugin_dirs = [
	I0729 17:51:54.856046   48198 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0729 17:51:54.856057   48198 command_runner.go:130] > # ]
	I0729 17:51:54.856065   48198 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0729 17:51:54.856071   48198 command_runner.go:130] > [crio.metrics]
	I0729 17:51:54.856075   48198 command_runner.go:130] > # Globally enable or disable metrics support.
	I0729 17:51:54.856081   48198 command_runner.go:130] > enable_metrics = true
	I0729 17:51:54.856086   48198 command_runner.go:130] > # Specify enabled metrics collectors.
	I0729 17:51:54.856092   48198 command_runner.go:130] > # Per default all metrics are enabled.
	I0729 17:51:54.856102   48198 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0729 17:51:54.856110   48198 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0729 17:51:54.856118   48198 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0729 17:51:54.856124   48198 command_runner.go:130] > # metrics_collectors = [
	I0729 17:51:54.856128   48198 command_runner.go:130] > # 	"operations",
	I0729 17:51:54.856134   48198 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0729 17:51:54.856139   48198 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0729 17:51:54.856145   48198 command_runner.go:130] > # 	"operations_errors",
	I0729 17:51:54.856149   48198 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0729 17:51:54.856155   48198 command_runner.go:130] > # 	"image_pulls_by_name",
	I0729 17:51:54.856159   48198 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0729 17:51:54.856166   48198 command_runner.go:130] > # 	"image_pulls_failures",
	I0729 17:51:54.856169   48198 command_runner.go:130] > # 	"image_pulls_successes",
	I0729 17:51:54.856173   48198 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0729 17:51:54.856179   48198 command_runner.go:130] > # 	"image_layer_reuse",
	I0729 17:51:54.856184   48198 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0729 17:51:54.856190   48198 command_runner.go:130] > # 	"containers_oom_total",
	I0729 17:51:54.856194   48198 command_runner.go:130] > # 	"containers_oom",
	I0729 17:51:54.856200   48198 command_runner.go:130] > # 	"processes_defunct",
	I0729 17:51:54.856204   48198 command_runner.go:130] > # 	"operations_total",
	I0729 17:51:54.856211   48198 command_runner.go:130] > # 	"operations_latency_seconds",
	I0729 17:51:54.856215   48198 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0729 17:51:54.856221   48198 command_runner.go:130] > # 	"operations_errors_total",
	I0729 17:51:54.856225   48198 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0729 17:51:54.856232   48198 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0729 17:51:54.856236   48198 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0729 17:51:54.856242   48198 command_runner.go:130] > # 	"image_pulls_success_total",
	I0729 17:51:54.856246   48198 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0729 17:51:54.856252   48198 command_runner.go:130] > # 	"containers_oom_count_total",
	I0729 17:51:54.856261   48198 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0729 17:51:54.856267   48198 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0729 17:51:54.856270   48198 command_runner.go:130] > # ]
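To collect only a subset of metrics, the collectors listed above can be enumerated explicitly; this selection is illustrative rather than the configuration used in this run:

	metrics_collectors = [
		"operations",
		"image_pulls_failures",
		"containers_oom_total",
	]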
	I0729 17:51:54.856277   48198 command_runner.go:130] > # The port on which the metrics server will listen.
	I0729 17:51:54.856281   48198 command_runner.go:130] > # metrics_port = 9090
	I0729 17:51:54.856288   48198 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0729 17:51:54.856292   48198 command_runner.go:130] > # metrics_socket = ""
	I0729 17:51:54.856299   48198 command_runner.go:130] > # The certificate for the secure metrics server.
	I0729 17:51:54.856307   48198 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0729 17:51:54.856314   48198 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0729 17:51:54.856324   48198 command_runner.go:130] > # certificate on any modification event.
	I0729 17:51:54.856330   48198 command_runner.go:130] > # metrics_cert = ""
	I0729 17:51:54.856334   48198 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0729 17:51:54.856341   48198 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0729 17:51:54.856345   48198 command_runner.go:130] > # metrics_key = ""
	I0729 17:51:54.856350   48198 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0729 17:51:54.856356   48198 command_runner.go:130] > [crio.tracing]
	I0729 17:51:54.856361   48198 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0729 17:51:54.856367   48198 command_runner.go:130] > # enable_tracing = false
	I0729 17:51:54.856372   48198 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0729 17:51:54.856379   48198 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0729 17:51:54.856385   48198 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0729 17:51:54.856392   48198 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
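Assuming a collector listening at the default endpoint shown above, tracing could be enabled with the always-sample rate mentioned in the comment; this is a sketch, not this run's configuration:

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "0.0.0.0:4317"
	tracing_sampling_rate_per_million = 1000000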
	I0729 17:51:54.856396   48198 command_runner.go:130] > # CRI-O NRI configuration.
	I0729 17:51:54.856399   48198 command_runner.go:130] > [crio.nri]
	I0729 17:51:54.856406   48198 command_runner.go:130] > # Globally enable or disable NRI.
	I0729 17:51:54.856410   48198 command_runner.go:130] > # enable_nri = false
	I0729 17:51:54.856416   48198 command_runner.go:130] > # NRI socket to listen on.
	I0729 17:51:54.856421   48198 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0729 17:51:54.856427   48198 command_runner.go:130] > # NRI plugin directory to use.
	I0729 17:51:54.856431   48198 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0729 17:51:54.856437   48198 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0729 17:51:54.856442   48198 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0729 17:51:54.856449   48198 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0729 17:51:54.856453   48198 command_runner.go:130] > # nri_disable_connections = false
	I0729 17:51:54.856460   48198 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0729 17:51:54.856474   48198 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0729 17:51:54.856481   48198 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0729 17:51:54.856485   48198 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0729 17:51:54.856493   48198 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0729 17:51:54.856498   48198 command_runner.go:130] > [crio.stats]
	I0729 17:51:54.856504   48198 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0729 17:51:54.856511   48198 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0729 17:51:54.856515   48198 command_runner.go:130] > # stats_collection_period = 0
	I0729 17:51:54.856667   48198 cni.go:84] Creating CNI manager for ""
	I0729 17:51:54.856682   48198 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 17:51:54.856693   48198 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 17:51:54.856713   48198 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.218 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-602258 NodeName:multinode-602258 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 17:51:54.856878   48198 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-602258"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 17:51:54.856953   48198 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 17:51:54.866771   48198 command_runner.go:130] > kubeadm
	I0729 17:51:54.866792   48198 command_runner.go:130] > kubectl
	I0729 17:51:54.866799   48198 command_runner.go:130] > kubelet
	I0729 17:51:54.866856   48198 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 17:51:54.866919   48198 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 17:51:54.876681   48198 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0729 17:51:54.894889   48198 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 17:51:54.912236   48198 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0729 17:51:54.929376   48198 ssh_runner.go:195] Run: grep 192.168.39.218	control-plane.minikube.internal$ /etc/hosts
	I0729 17:51:54.933233   48198 command_runner.go:130] > 192.168.39.218	control-plane.minikube.internal
	I0729 17:51:54.933307   48198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:51:55.072715   48198 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:51:55.087326   48198 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/multinode-602258 for IP: 192.168.39.218
	I0729 17:51:55.087349   48198 certs.go:194] generating shared ca certs ...
	I0729 17:51:55.087364   48198 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:51:55.087565   48198 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 17:51:55.087619   48198 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 17:51:55.087636   48198 certs.go:256] generating profile certs ...
	I0729 17:51:55.087784   48198 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/multinode-602258/client.key
	I0729 17:51:55.087868   48198 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/multinode-602258/apiserver.key.b59fdcf4
	I0729 17:51:55.087937   48198 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/multinode-602258/proxy-client.key
	I0729 17:51:55.087950   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 17:51:55.087972   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 17:51:55.087990   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 17:51:55.088007   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 17:51:55.088023   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/multinode-602258/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 17:51:55.088042   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/multinode-602258/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 17:51:55.088060   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/multinode-602258/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 17:51:55.088078   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/multinode-602258/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 17:51:55.088145   48198 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 17:51:55.088186   48198 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 17:51:55.088199   48198 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 17:51:55.088230   48198 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 17:51:55.088263   48198 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 17:51:55.088295   48198 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 17:51:55.088346   48198 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 17:51:55.088383   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> /usr/share/ca-certificates/183932.pem
	I0729 17:51:55.088404   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:51:55.088422   48198 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem -> /usr/share/ca-certificates/18393.pem
	I0729 17:51:55.089082   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 17:51:55.114185   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 17:51:55.137738   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 17:51:55.161184   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 17:51:55.185348   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/multinode-602258/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 17:51:55.209100   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/multinode-602258/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 17:51:55.232524   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/multinode-602258/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 17:51:55.255926   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/multinode-602258/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 17:51:55.279764   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 17:51:55.303484   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 17:51:55.328159   48198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 17:51:55.351624   48198 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 17:51:55.367514   48198 ssh_runner.go:195] Run: openssl version
	I0729 17:51:55.373185   48198 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0729 17:51:55.373252   48198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 17:51:55.383877   48198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 17:51:55.388302   48198 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 17:51:55.388329   48198 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 17:51:55.388362   48198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 17:51:55.393815   48198 command_runner.go:130] > 3ec20f2e
	I0729 17:51:55.393885   48198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 17:51:55.402943   48198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 17:51:55.413336   48198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:51:55.418157   48198 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:51:55.418409   48198 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:51:55.418484   48198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:51:55.423838   48198 command_runner.go:130] > b5213941
	I0729 17:51:55.424086   48198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 17:51:55.433374   48198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 17:51:55.443924   48198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 17:51:55.448188   48198 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 17:51:55.448325   48198 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 17:51:55.448375   48198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 17:51:55.453780   48198 command_runner.go:130] > 51391683
	I0729 17:51:55.453844   48198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 17:51:55.462747   48198 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 17:51:55.466995   48198 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 17:51:55.467015   48198 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0729 17:51:55.467021   48198 command_runner.go:130] > Device: 253,1	Inode: 4197931     Links: 1
	I0729 17:51:55.467027   48198 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 17:51:55.467034   48198 command_runner.go:130] > Access: 2024-07-29 17:45:07.159583524 +0000
	I0729 17:51:55.467038   48198 command_runner.go:130] > Modify: 2024-07-29 17:45:07.159583524 +0000
	I0729 17:51:55.467043   48198 command_runner.go:130] > Change: 2024-07-29 17:45:07.159583524 +0000
	I0729 17:51:55.467048   48198 command_runner.go:130] >  Birth: 2024-07-29 17:45:07.159583524 +0000
	I0729 17:51:55.467245   48198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 17:51:55.472705   48198 command_runner.go:130] > Certificate will not expire
	I0729 17:51:55.472769   48198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 17:51:55.478549   48198 command_runner.go:130] > Certificate will not expire
	I0729 17:51:55.478603   48198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 17:51:55.484042   48198 command_runner.go:130] > Certificate will not expire
	I0729 17:51:55.484234   48198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 17:51:55.489741   48198 command_runner.go:130] > Certificate will not expire
	I0729 17:51:55.489804   48198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 17:51:55.494967   48198 command_runner.go:130] > Certificate will not expire
	I0729 17:51:55.495242   48198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 17:51:55.500489   48198 command_runner.go:130] > Certificate will not expire
	I0729 17:51:55.500729   48198 kubeadm.go:392] StartCluster: {Name:multinode-602258 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-602258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.218 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.107 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:51:55.500869   48198 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 17:51:55.500921   48198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 17:51:55.540098   48198 command_runner.go:130] > 2322d2050e81876dd49cf2144c141ed8528c2c549e7dd61e0c48e22f29649330
	I0729 17:51:55.540128   48198 command_runner.go:130] > 7416acdd88a7db531e03ee5767e76470f46b78db5c45ef842821db5036503517
	I0729 17:51:55.540134   48198 command_runner.go:130] > d7615041ffc1aef8098e347b0f7f11240291b7814bf03e59b1e336bc7d7bb7c0
	I0729 17:51:55.540140   48198 command_runner.go:130] > 864297549b1272800bfebdd28175a349b3ee8ef7c7bbd78c771eaad9e02b25cc
	I0729 17:51:55.540149   48198 command_runner.go:130] > e82eb1db29cc5f5d61344dd7ed6985093a7f202f4cdac7abf12ba859cab24ac6
	I0729 17:51:55.540157   48198 command_runner.go:130] > 07fee3a17c566e898bf4bda366cd3fef0865591a42bdfbf1d81036e598ac14ce
	I0729 17:51:55.540167   48198 command_runner.go:130] > 6e7844975c2969533b47b744772eb44171fe78572163dc172999a28d44fcf4ee
	I0729 17:51:55.540177   48198 command_runner.go:130] > 1f624d4b42189dfe667a9aad521e37765c1b61fa5a1300f05f7a937db2c6a6fa
	I0729 17:51:55.541408   48198 cri.go:89] found id: "2322d2050e81876dd49cf2144c141ed8528c2c549e7dd61e0c48e22f29649330"
	I0729 17:51:55.541427   48198 cri.go:89] found id: "7416acdd88a7db531e03ee5767e76470f46b78db5c45ef842821db5036503517"
	I0729 17:51:55.541434   48198 cri.go:89] found id: "d7615041ffc1aef8098e347b0f7f11240291b7814bf03e59b1e336bc7d7bb7c0"
	I0729 17:51:55.541439   48198 cri.go:89] found id: "864297549b1272800bfebdd28175a349b3ee8ef7c7bbd78c771eaad9e02b25cc"
	I0729 17:51:55.541442   48198 cri.go:89] found id: "e82eb1db29cc5f5d61344dd7ed6985093a7f202f4cdac7abf12ba859cab24ac6"
	I0729 17:51:55.541447   48198 cri.go:89] found id: "07fee3a17c566e898bf4bda366cd3fef0865591a42bdfbf1d81036e598ac14ce"
	I0729 17:51:55.541451   48198 cri.go:89] found id: "6e7844975c2969533b47b744772eb44171fe78572163dc172999a28d44fcf4ee"
	I0729 17:51:55.541455   48198 cri.go:89] found id: "1f624d4b42189dfe667a9aad521e37765c1b61fa5a1300f05f7a937db2c6a6fa"
	I0729 17:51:55.541459   48198 cri.go:89] found id: ""
	I0729 17:51:55.541506   48198 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 17:56:03 multinode-602258 crio[2883]: time="2024-07-29 17:56:03.527907514Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722275763527873053,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1c19cf7f-0f36-4137-9eae-1e2343323837 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:56:03 multinode-602258 crio[2883]: time="2024-07-29 17:56:03.528868898Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec7a4119-f9f8-4024-94f5-00343e2375c2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:56:03 multinode-602258 crio[2883]: time="2024-07-29 17:56:03.528942678Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec7a4119-f9f8-4024-94f5-00343e2375c2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:56:03 multinode-602258 crio[2883]: time="2024-07-29 17:56:03.529395719Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94d253c5fc168cec3e4a288f1ff892488ebb63bd43a64cb2b364daeef4a42092,PodSandboxId:ecba257d3f2a1cd221ec4dd1fb5570367cb55f9177de6c5bdf28b9fb345816e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722275556517307823,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kqrzf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1c31cd36-a917-4a07-a18f-887c7defa6e2,},Annotations:map[string]string{io.kubernetes.container.hash: 1799863e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3426a69c32cd243e491fea11995a1bf631263c68a605d31e6a21b97ce4d0ac4b,PodSandboxId:3cda1a27a1fe30cd41fb7cc9a711e9d0de01ff7422bf7c70ce897f67901c7a7b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722275523193375217,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-68dnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 700c5f4f-8bac-4a69-8174-0b8a80c4e831,},Annotations:map[string]string{io.kubernetes.container.hash: ef2bce6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a7ed335c280800a15790e3d10b98a97080fe5a90197bbc732c626cfdd89f67a,PodSandboxId:b44765ca91bab1000cb666b4a381cef645dd4299547f17c543e4595f9e2277a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722275523118117406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b7fmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbbeed00-0740-41dc-b9f2-aa03336074ac,},Annotations:map[string]string{io.kubernetes.container.hash: be4c111a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf1a3774da5a1b300885fa869b6ee486244da9331c7388294b88d4cc568c1065,PodSandboxId:cb284998c1c34179968f91ccb58ab4392545eca3a67410e7f46cb24d6acdd73d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722275522962364647,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shhsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8951fee7-e31c-401a-8688-79487ea5fc64,},Annotations:map[string]
string{io.kubernetes.container.hash: 5dbfc197,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171b21ffde4794280075b3f5b4b787a263f302a1fb712bb15b69cba1cefe437d,PodSandboxId:73fc019512997562eddb104934baa7fb9afdb42ec0424043d15835a1801a723d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722275522884930368,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dee56b25-3f87-483c-8fda-95989162e3ba,},Annotations:map[string]string{io.ku
bernetes.container.hash: cdf99c0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b655e5789fb522e6dae942b61ff483971a837d40607e3530bc9c1ae524e627e1,PodSandboxId:356e8fa20bee0469a0aa2c1a1c427a032fea595ab55791571b243d5dd1895e79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722275518023760140,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91128e09c1b43339a5edc267d8a2607c,},Annotations:map[string]string{io.kubernetes.container.hash: 93b97758,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619d14875058f9bdafffdb8f819f0dbada1c276a2b0c1a22286f2a986be363bf,PodSandboxId:8716f9d02e68514d22182809651ba202d94a52d2e187db3894ee2c47c4a3282c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722275517994184016,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74794ee3688afe14ba4fbb763c9f1f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd3c8ef53e88607de4cee229954a3b64b2c31ca195b03e2e83e6b390b674f06a,PodSandboxId:e574e4d95b35c760042756833f92daa0a5169e957cf29184fedb43ab6fedaa71,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722275517930270247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfb26bf40408c8779988df3a1b3dbe66,},Annotations:map[string]string{io.kubernetes.container.hash: 76ddb303,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f75287ad92d6d5f2e5b7da85de7858362a02e33879b2c77184aafed885e2e0d,PodSandboxId:72e988dc049fa83ebf7b5971055dadbc6fe84fe29f0c3ee4d33c920aa33d0ef4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722275517908930153,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 987b84de9f64c76a1f7b604c11dc5ffd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f87e33e730abde22af3b74af0ab2bd17918944204a4b158207fa583b003b10b9,PodSandboxId:37d9e64592cd7aaa1505dcddccbc5dd067152f471067fd6873fddcbaf957738a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722275199589060401,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kqrzf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1c31cd36-a917-4a07-a18f-887c7defa6e2,},Annotations:map[string]string{io.kubernetes.container.hash: 1799863e,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2322d2050e81876dd49cf2144c141ed8528c2c549e7dd61e0c48e22f29649330,PodSandboxId:ee8980b3e7f2b194bf74c3732dd35f28032437e5a232010aa7d6c37542186709,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722275144901037514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b7fmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbbeed00-0740-41dc-b9f2-aa03336074ac,},Annotations:map[string]string{io.kubernetes.container.hash: be4c111a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7416acdd88a7db531e03ee5767e76470f46b78db5c45ef842821db5036503517,PodSandboxId:444cd6c8b4e011244c073f7937a14b413182cb3cea8fc4ffafab8ea2fe27b8d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722275144858336451,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: dee56b25-3f87-483c-8fda-95989162e3ba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf99c0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7615041ffc1aef8098e347b0f7f11240291b7814bf03e59b1e336bc7d7bb7c0,PodSandboxId:fd87ad0c3835a9427b7071df83e4698fd7627224603c974a5578338a3878b88c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722275133246875325,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-68dnv,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 700c5f4f-8bac-4a69-8174-0b8a80c4e831,},Annotations:map[string]string{io.kubernetes.container.hash: ef2bce6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:864297549b1272800bfebdd28175a349b3ee8ef7c7bbd78c771eaad9e02b25cc,PodSandboxId:df1d9d917d4d21947539533f40d12eb1b40d87fe3253db110c799825ab064153,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722275131100162272,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shhsx,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 8951fee7-e31c-401a-8688-79487ea5fc64,},Annotations:map[string]string{io.kubernetes.container.hash: 5dbfc197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e82eb1db29cc5f5d61344dd7ed6985093a7f202f4cdac7abf12ba859cab24ac6,PodSandboxId:fd621ebddac51317684d0de146954e509296749a3949dffd5d6da406fa9f7efd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722275111299056055,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
74794ee3688afe14ba4fbb763c9f1f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e7844975c2969533b47b744772eb44171fe78572163dc172999a28d44fcf4ee,PodSandboxId:b63a4e8c712bf07ba90c52a91a51b4a32c0505aab36ca15f5a09f7d3a15117b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722275111235776829,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91128e09c1b43339a5edc267d8a2607c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 93b97758,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07fee3a17c566e898bf4bda366cd3fef0865591a42bdfbf1d81036e598ac14ce,PodSandboxId:d573150cedc5641a9fb0a3a4cfb625233a2f4626ead42fee1318995d61551222,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722275111245721916,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfb26bf40408c8779988df3a1b3dbe66,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 76ddb303,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f624d4b42189dfe667a9aad521e37765c1b61fa5a1300f05f7a937db2c6a6fa,PodSandboxId:da889349b706f3293f3f67b781023c77afe739743aceecb64997b2e77d5d49a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722275111216522794,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 987b84de9f64c76a1f7b604c11dc5ffd,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec7a4119-f9f8-4024-94f5-00343e2375c2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:56:03 multinode-602258 crio[2883]: time="2024-07-29 17:56:03.573928368Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c3675127-6aa3-4f7f-964d-78a37664524a name=/runtime.v1.RuntimeService/Version
	Jul 29 17:56:03 multinode-602258 crio[2883]: time="2024-07-29 17:56:03.574000730Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c3675127-6aa3-4f7f-964d-78a37664524a name=/runtime.v1.RuntimeService/Version
	Jul 29 17:56:03 multinode-602258 crio[2883]: time="2024-07-29 17:56:03.576191731Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=48b08ae7-e619-4878-ad02-4fad531196cc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:56:03 multinode-602258 crio[2883]: time="2024-07-29 17:56:03.576691079Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722275763576669676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48b08ae7-e619-4878-ad02-4fad531196cc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:56:03 multinode-602258 crio[2883]: time="2024-07-29 17:56:03.578074907Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1e39ac42-8021-494f-b7bb-9b0f297c0806 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:56:03 multinode-602258 crio[2883]: time="2024-07-29 17:56:03.578129928Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1e39ac42-8021-494f-b7bb-9b0f297c0806 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:56:03 multinode-602258 crio[2883]: time="2024-07-29 17:56:03.578560302Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94d253c5fc168cec3e4a288f1ff892488ebb63bd43a64cb2b364daeef4a42092,PodSandboxId:ecba257d3f2a1cd221ec4dd1fb5570367cb55f9177de6c5bdf28b9fb345816e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722275556517307823,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kqrzf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1c31cd36-a917-4a07-a18f-887c7defa6e2,},Annotations:map[string]string{io.kubernetes.container.hash: 1799863e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3426a69c32cd243e491fea11995a1bf631263c68a605d31e6a21b97ce4d0ac4b,PodSandboxId:3cda1a27a1fe30cd41fb7cc9a711e9d0de01ff7422bf7c70ce897f67901c7a7b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722275523193375217,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-68dnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 700c5f4f-8bac-4a69-8174-0b8a80c4e831,},Annotations:map[string]string{io.kubernetes.container.hash: ef2bce6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a7ed335c280800a15790e3d10b98a97080fe5a90197bbc732c626cfdd89f67a,PodSandboxId:b44765ca91bab1000cb666b4a381cef645dd4299547f17c543e4595f9e2277a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722275523118117406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b7fmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbbeed00-0740-41dc-b9f2-aa03336074ac,},Annotations:map[string]string{io.kubernetes.container.hash: be4c111a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf1a3774da5a1b300885fa869b6ee486244da9331c7388294b88d4cc568c1065,PodSandboxId:cb284998c1c34179968f91ccb58ab4392545eca3a67410e7f46cb24d6acdd73d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722275522962364647,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shhsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8951fee7-e31c-401a-8688-79487ea5fc64,},Annotations:map[string]
string{io.kubernetes.container.hash: 5dbfc197,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171b21ffde4794280075b3f5b4b787a263f302a1fb712bb15b69cba1cefe437d,PodSandboxId:73fc019512997562eddb104934baa7fb9afdb42ec0424043d15835a1801a723d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722275522884930368,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dee56b25-3f87-483c-8fda-95989162e3ba,},Annotations:map[string]string{io.ku
bernetes.container.hash: cdf99c0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b655e5789fb522e6dae942b61ff483971a837d40607e3530bc9c1ae524e627e1,PodSandboxId:356e8fa20bee0469a0aa2c1a1c427a032fea595ab55791571b243d5dd1895e79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722275518023760140,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91128e09c1b43339a5edc267d8a2607c,},Annotations:map[string]string{io.kubernetes.container.hash: 93b97758,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619d14875058f9bdafffdb8f819f0dbada1c276a2b0c1a22286f2a986be363bf,PodSandboxId:8716f9d02e68514d22182809651ba202d94a52d2e187db3894ee2c47c4a3282c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722275517994184016,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74794ee3688afe14ba4fbb763c9f1f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd3c8ef53e88607de4cee229954a3b64b2c31ca195b03e2e83e6b390b674f06a,PodSandboxId:e574e4d95b35c760042756833f92daa0a5169e957cf29184fedb43ab6fedaa71,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722275517930270247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfb26bf40408c8779988df3a1b3dbe66,},Annotations:map[string]string{io.kubernetes.container.hash: 76ddb303,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f75287ad92d6d5f2e5b7da85de7858362a02e33879b2c77184aafed885e2e0d,PodSandboxId:72e988dc049fa83ebf7b5971055dadbc6fe84fe29f0c3ee4d33c920aa33d0ef4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722275517908930153,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 987b84de9f64c76a1f7b604c11dc5ffd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f87e33e730abde22af3b74af0ab2bd17918944204a4b158207fa583b003b10b9,PodSandboxId:37d9e64592cd7aaa1505dcddccbc5dd067152f471067fd6873fddcbaf957738a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722275199589060401,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kqrzf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1c31cd36-a917-4a07-a18f-887c7defa6e2,},Annotations:map[string]string{io.kubernetes.container.hash: 1799863e,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2322d2050e81876dd49cf2144c141ed8528c2c549e7dd61e0c48e22f29649330,PodSandboxId:ee8980b3e7f2b194bf74c3732dd35f28032437e5a232010aa7d6c37542186709,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722275144901037514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b7fmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbbeed00-0740-41dc-b9f2-aa03336074ac,},Annotations:map[string]string{io.kubernetes.container.hash: be4c111a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7416acdd88a7db531e03ee5767e76470f46b78db5c45ef842821db5036503517,PodSandboxId:444cd6c8b4e011244c073f7937a14b413182cb3cea8fc4ffafab8ea2fe27b8d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722275144858336451,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: dee56b25-3f87-483c-8fda-95989162e3ba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf99c0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7615041ffc1aef8098e347b0f7f11240291b7814bf03e59b1e336bc7d7bb7c0,PodSandboxId:fd87ad0c3835a9427b7071df83e4698fd7627224603c974a5578338a3878b88c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722275133246875325,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-68dnv,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 700c5f4f-8bac-4a69-8174-0b8a80c4e831,},Annotations:map[string]string{io.kubernetes.container.hash: ef2bce6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:864297549b1272800bfebdd28175a349b3ee8ef7c7bbd78c771eaad9e02b25cc,PodSandboxId:df1d9d917d4d21947539533f40d12eb1b40d87fe3253db110c799825ab064153,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722275131100162272,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shhsx,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 8951fee7-e31c-401a-8688-79487ea5fc64,},Annotations:map[string]string{io.kubernetes.container.hash: 5dbfc197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e82eb1db29cc5f5d61344dd7ed6985093a7f202f4cdac7abf12ba859cab24ac6,PodSandboxId:fd621ebddac51317684d0de146954e509296749a3949dffd5d6da406fa9f7efd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722275111299056055,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
74794ee3688afe14ba4fbb763c9f1f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e7844975c2969533b47b744772eb44171fe78572163dc172999a28d44fcf4ee,PodSandboxId:b63a4e8c712bf07ba90c52a91a51b4a32c0505aab36ca15f5a09f7d3a15117b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722275111235776829,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91128e09c1b43339a5edc267d8a2607c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 93b97758,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07fee3a17c566e898bf4bda366cd3fef0865591a42bdfbf1d81036e598ac14ce,PodSandboxId:d573150cedc5641a9fb0a3a4cfb625233a2f4626ead42fee1318995d61551222,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722275111245721916,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfb26bf40408c8779988df3a1b3dbe66,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 76ddb303,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f624d4b42189dfe667a9aad521e37765c1b61fa5a1300f05f7a937db2c6a6fa,PodSandboxId:da889349b706f3293f3f67b781023c77afe739743aceecb64997b2e77d5d49a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722275111216522794,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 987b84de9f64c76a1f7b604c11dc5ffd,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1e39ac42-8021-494f-b7bb-9b0f297c0806 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:56:03 multinode-602258 crio[2883]: time="2024-07-29 17:56:03.619943937Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=261bb1ae-4f57-4046-87d6-fcc2c27fa679 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:56:03 multinode-602258 crio[2883]: time="2024-07-29 17:56:03.620029407Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=261bb1ae-4f57-4046-87d6-fcc2c27fa679 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:56:03 multinode-602258 crio[2883]: time="2024-07-29 17:56:03.621317562Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9f6b791b-bc71-4610-8b1e-bc62149b227f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:56:03 multinode-602258 crio[2883]: time="2024-07-29 17:56:03.621745639Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722275763621722415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9f6b791b-bc71-4610-8b1e-bc62149b227f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:56:03 multinode-602258 crio[2883]: time="2024-07-29 17:56:03.622413377Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e229df56-364e-4ac0-bab4-a84ea9092642 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:56:03 multinode-602258 crio[2883]: time="2024-07-29 17:56:03.622489004Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e229df56-364e-4ac0-bab4-a84ea9092642 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:56:03 multinode-602258 crio[2883]: time="2024-07-29 17:56:03.622843576Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94d253c5fc168cec3e4a288f1ff892488ebb63bd43a64cb2b364daeef4a42092,PodSandboxId:ecba257d3f2a1cd221ec4dd1fb5570367cb55f9177de6c5bdf28b9fb345816e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722275556517307823,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kqrzf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1c31cd36-a917-4a07-a18f-887c7defa6e2,},Annotations:map[string]string{io.kubernetes.container.hash: 1799863e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3426a69c32cd243e491fea11995a1bf631263c68a605d31e6a21b97ce4d0ac4b,PodSandboxId:3cda1a27a1fe30cd41fb7cc9a711e9d0de01ff7422bf7c70ce897f67901c7a7b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722275523193375217,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-68dnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 700c5f4f-8bac-4a69-8174-0b8a80c4e831,},Annotations:map[string]string{io.kubernetes.container.hash: ef2bce6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a7ed335c280800a15790e3d10b98a97080fe5a90197bbc732c626cfdd89f67a,PodSandboxId:b44765ca91bab1000cb666b4a381cef645dd4299547f17c543e4595f9e2277a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722275523118117406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b7fmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbbeed00-0740-41dc-b9f2-aa03336074ac,},Annotations:map[string]string{io.kubernetes.container.hash: be4c111a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf1a3774da5a1b300885fa869b6ee486244da9331c7388294b88d4cc568c1065,PodSandboxId:cb284998c1c34179968f91ccb58ab4392545eca3a67410e7f46cb24d6acdd73d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722275522962364647,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shhsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8951fee7-e31c-401a-8688-79487ea5fc64,},Annotations:map[string]
string{io.kubernetes.container.hash: 5dbfc197,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171b21ffde4794280075b3f5b4b787a263f302a1fb712bb15b69cba1cefe437d,PodSandboxId:73fc019512997562eddb104934baa7fb9afdb42ec0424043d15835a1801a723d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722275522884930368,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dee56b25-3f87-483c-8fda-95989162e3ba,},Annotations:map[string]string{io.ku
bernetes.container.hash: cdf99c0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b655e5789fb522e6dae942b61ff483971a837d40607e3530bc9c1ae524e627e1,PodSandboxId:356e8fa20bee0469a0aa2c1a1c427a032fea595ab55791571b243d5dd1895e79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722275518023760140,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91128e09c1b43339a5edc267d8a2607c,},Annotations:map[string]string{io.kubernetes.container.hash: 93b97758,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619d14875058f9bdafffdb8f819f0dbada1c276a2b0c1a22286f2a986be363bf,PodSandboxId:8716f9d02e68514d22182809651ba202d94a52d2e187db3894ee2c47c4a3282c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722275517994184016,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74794ee3688afe14ba4fbb763c9f1f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd3c8ef53e88607de4cee229954a3b64b2c31ca195b03e2e83e6b390b674f06a,PodSandboxId:e574e4d95b35c760042756833f92daa0a5169e957cf29184fedb43ab6fedaa71,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722275517930270247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfb26bf40408c8779988df3a1b3dbe66,},Annotations:map[string]string{io.kubernetes.container.hash: 76ddb303,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f75287ad92d6d5f2e5b7da85de7858362a02e33879b2c77184aafed885e2e0d,PodSandboxId:72e988dc049fa83ebf7b5971055dadbc6fe84fe29f0c3ee4d33c920aa33d0ef4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722275517908930153,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 987b84de9f64c76a1f7b604c11dc5ffd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f87e33e730abde22af3b74af0ab2bd17918944204a4b158207fa583b003b10b9,PodSandboxId:37d9e64592cd7aaa1505dcddccbc5dd067152f471067fd6873fddcbaf957738a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722275199589060401,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kqrzf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1c31cd36-a917-4a07-a18f-887c7defa6e2,},Annotations:map[string]string{io.kubernetes.container.hash: 1799863e,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2322d2050e81876dd49cf2144c141ed8528c2c549e7dd61e0c48e22f29649330,PodSandboxId:ee8980b3e7f2b194bf74c3732dd35f28032437e5a232010aa7d6c37542186709,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722275144901037514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b7fmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbbeed00-0740-41dc-b9f2-aa03336074ac,},Annotations:map[string]string{io.kubernetes.container.hash: be4c111a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7416acdd88a7db531e03ee5767e76470f46b78db5c45ef842821db5036503517,PodSandboxId:444cd6c8b4e011244c073f7937a14b413182cb3cea8fc4ffafab8ea2fe27b8d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722275144858336451,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: dee56b25-3f87-483c-8fda-95989162e3ba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf99c0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7615041ffc1aef8098e347b0f7f11240291b7814bf03e59b1e336bc7d7bb7c0,PodSandboxId:fd87ad0c3835a9427b7071df83e4698fd7627224603c974a5578338a3878b88c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722275133246875325,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-68dnv,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 700c5f4f-8bac-4a69-8174-0b8a80c4e831,},Annotations:map[string]string{io.kubernetes.container.hash: ef2bce6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:864297549b1272800bfebdd28175a349b3ee8ef7c7bbd78c771eaad9e02b25cc,PodSandboxId:df1d9d917d4d21947539533f40d12eb1b40d87fe3253db110c799825ab064153,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722275131100162272,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shhsx,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 8951fee7-e31c-401a-8688-79487ea5fc64,},Annotations:map[string]string{io.kubernetes.container.hash: 5dbfc197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e82eb1db29cc5f5d61344dd7ed6985093a7f202f4cdac7abf12ba859cab24ac6,PodSandboxId:fd621ebddac51317684d0de146954e509296749a3949dffd5d6da406fa9f7efd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722275111299056055,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
74794ee3688afe14ba4fbb763c9f1f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e7844975c2969533b47b744772eb44171fe78572163dc172999a28d44fcf4ee,PodSandboxId:b63a4e8c712bf07ba90c52a91a51b4a32c0505aab36ca15f5a09f7d3a15117b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722275111235776829,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91128e09c1b43339a5edc267d8a2607c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 93b97758,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07fee3a17c566e898bf4bda366cd3fef0865591a42bdfbf1d81036e598ac14ce,PodSandboxId:d573150cedc5641a9fb0a3a4cfb625233a2f4626ead42fee1318995d61551222,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722275111245721916,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfb26bf40408c8779988df3a1b3dbe66,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 76ddb303,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f624d4b42189dfe667a9aad521e37765c1b61fa5a1300f05f7a937db2c6a6fa,PodSandboxId:da889349b706f3293f3f67b781023c77afe739743aceecb64997b2e77d5d49a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722275111216522794,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 987b84de9f64c76a1f7b604c11dc5ffd,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e229df56-364e-4ac0-bab4-a84ea9092642 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:56:03 multinode-602258 crio[2883]: time="2024-07-29 17:56:03.662949100Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2e232cd0-327c-4012-b595-efdfa3f8e7bc name=/runtime.v1.RuntimeService/Version
	Jul 29 17:56:03 multinode-602258 crio[2883]: time="2024-07-29 17:56:03.663045994Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2e232cd0-327c-4012-b595-efdfa3f8e7bc name=/runtime.v1.RuntimeService/Version
	Jul 29 17:56:03 multinode-602258 crio[2883]: time="2024-07-29 17:56:03.663961455Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=523171ef-4e9b-4fb2-b932-87ab27b1df10 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:56:03 multinode-602258 crio[2883]: time="2024-07-29 17:56:03.664507783Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722275763664476242,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=523171ef-4e9b-4fb2-b932-87ab27b1df10 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:56:03 multinode-602258 crio[2883]: time="2024-07-29 17:56:03.665078793Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87113999-21b1-459a-9cc3-9ddea82a287e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:56:03 multinode-602258 crio[2883]: time="2024-07-29 17:56:03.665138276Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87113999-21b1-459a-9cc3-9ddea82a287e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:56:03 multinode-602258 crio[2883]: time="2024-07-29 17:56:03.669631637Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94d253c5fc168cec3e4a288f1ff892488ebb63bd43a64cb2b364daeef4a42092,PodSandboxId:ecba257d3f2a1cd221ec4dd1fb5570367cb55f9177de6c5bdf28b9fb345816e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722275556517307823,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kqrzf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1c31cd36-a917-4a07-a18f-887c7defa6e2,},Annotations:map[string]string{io.kubernetes.container.hash: 1799863e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3426a69c32cd243e491fea11995a1bf631263c68a605d31e6a21b97ce4d0ac4b,PodSandboxId:3cda1a27a1fe30cd41fb7cc9a711e9d0de01ff7422bf7c70ce897f67901c7a7b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722275523193375217,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-68dnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 700c5f4f-8bac-4a69-8174-0b8a80c4e831,},Annotations:map[string]string{io.kubernetes.container.hash: ef2bce6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a7ed335c280800a15790e3d10b98a97080fe5a90197bbc732c626cfdd89f67a,PodSandboxId:b44765ca91bab1000cb666b4a381cef645dd4299547f17c543e4595f9e2277a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722275523118117406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b7fmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbbeed00-0740-41dc-b9f2-aa03336074ac,},Annotations:map[string]string{io.kubernetes.container.hash: be4c111a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf1a3774da5a1b300885fa869b6ee486244da9331c7388294b88d4cc568c1065,PodSandboxId:cb284998c1c34179968f91ccb58ab4392545eca3a67410e7f46cb24d6acdd73d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722275522962364647,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shhsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8951fee7-e31c-401a-8688-79487ea5fc64,},Annotations:map[string]
string{io.kubernetes.container.hash: 5dbfc197,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171b21ffde4794280075b3f5b4b787a263f302a1fb712bb15b69cba1cefe437d,PodSandboxId:73fc019512997562eddb104934baa7fb9afdb42ec0424043d15835a1801a723d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722275522884930368,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dee56b25-3f87-483c-8fda-95989162e3ba,},Annotations:map[string]string{io.ku
bernetes.container.hash: cdf99c0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b655e5789fb522e6dae942b61ff483971a837d40607e3530bc9c1ae524e627e1,PodSandboxId:356e8fa20bee0469a0aa2c1a1c427a032fea595ab55791571b243d5dd1895e79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722275518023760140,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91128e09c1b43339a5edc267d8a2607c,},Annotations:map[string]string{io.kubernetes.container.hash: 93b97758,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619d14875058f9bdafffdb8f819f0dbada1c276a2b0c1a22286f2a986be363bf,PodSandboxId:8716f9d02e68514d22182809651ba202d94a52d2e187db3894ee2c47c4a3282c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722275517994184016,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74794ee3688afe14ba4fbb763c9f1f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd3c8ef53e88607de4cee229954a3b64b2c31ca195b03e2e83e6b390b674f06a,PodSandboxId:e574e4d95b35c760042756833f92daa0a5169e957cf29184fedb43ab6fedaa71,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722275517930270247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfb26bf40408c8779988df3a1b3dbe66,},Annotations:map[string]string{io.kubernetes.container.hash: 76ddb303,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f75287ad92d6d5f2e5b7da85de7858362a02e33879b2c77184aafed885e2e0d,PodSandboxId:72e988dc049fa83ebf7b5971055dadbc6fe84fe29f0c3ee4d33c920aa33d0ef4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722275517908930153,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 987b84de9f64c76a1f7b604c11dc5ffd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f87e33e730abde22af3b74af0ab2bd17918944204a4b158207fa583b003b10b9,PodSandboxId:37d9e64592cd7aaa1505dcddccbc5dd067152f471067fd6873fddcbaf957738a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722275199589060401,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kqrzf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1c31cd36-a917-4a07-a18f-887c7defa6e2,},Annotations:map[string]string{io.kubernetes.container.hash: 1799863e,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2322d2050e81876dd49cf2144c141ed8528c2c549e7dd61e0c48e22f29649330,PodSandboxId:ee8980b3e7f2b194bf74c3732dd35f28032437e5a232010aa7d6c37542186709,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722275144901037514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b7fmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbbeed00-0740-41dc-b9f2-aa03336074ac,},Annotations:map[string]string{io.kubernetes.container.hash: be4c111a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7416acdd88a7db531e03ee5767e76470f46b78db5c45ef842821db5036503517,PodSandboxId:444cd6c8b4e011244c073f7937a14b413182cb3cea8fc4ffafab8ea2fe27b8d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722275144858336451,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: dee56b25-3f87-483c-8fda-95989162e3ba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf99c0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7615041ffc1aef8098e347b0f7f11240291b7814bf03e59b1e336bc7d7bb7c0,PodSandboxId:fd87ad0c3835a9427b7071df83e4698fd7627224603c974a5578338a3878b88c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722275133246875325,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-68dnv,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 700c5f4f-8bac-4a69-8174-0b8a80c4e831,},Annotations:map[string]string{io.kubernetes.container.hash: ef2bce6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:864297549b1272800bfebdd28175a349b3ee8ef7c7bbd78c771eaad9e02b25cc,PodSandboxId:df1d9d917d4d21947539533f40d12eb1b40d87fe3253db110c799825ab064153,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722275131100162272,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shhsx,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 8951fee7-e31c-401a-8688-79487ea5fc64,},Annotations:map[string]string{io.kubernetes.container.hash: 5dbfc197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e82eb1db29cc5f5d61344dd7ed6985093a7f202f4cdac7abf12ba859cab24ac6,PodSandboxId:fd621ebddac51317684d0de146954e509296749a3949dffd5d6da406fa9f7efd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722275111299056055,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
74794ee3688afe14ba4fbb763c9f1f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e7844975c2969533b47b744772eb44171fe78572163dc172999a28d44fcf4ee,PodSandboxId:b63a4e8c712bf07ba90c52a91a51b4a32c0505aab36ca15f5a09f7d3a15117b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722275111235776829,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91128e09c1b43339a5edc267d8a2607c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 93b97758,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07fee3a17c566e898bf4bda366cd3fef0865591a42bdfbf1d81036e598ac14ce,PodSandboxId:d573150cedc5641a9fb0a3a4cfb625233a2f4626ead42fee1318995d61551222,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722275111245721916,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfb26bf40408c8779988df3a1b3dbe66,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 76ddb303,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f624d4b42189dfe667a9aad521e37765c1b61fa5a1300f05f7a937db2c6a6fa,PodSandboxId:da889349b706f3293f3f67b781023c77afe739743aceecb64997b2e77d5d49a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722275111216522794,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-602258,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 987b84de9f64c76a1f7b604c11dc5ffd,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87113999-21b1-459a-9cc3-9ddea82a287e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	94d253c5fc168       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   ecba257d3f2a1       busybox-fc5497c4f-kqrzf
	3426a69c32cd2       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   3cda1a27a1fe3       kindnet-68dnv
	9a7ed335c2808       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   b44765ca91bab       coredns-7db6d8ff4d-b7fmn
	bf1a3774da5a1       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   cb284998c1c34       kube-proxy-shhsx
	171b21ffde479       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   73fc019512997       storage-provisioner
	b655e5789fb52       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   356e8fa20bee0       etcd-multinode-602258
	619d14875058f       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   8716f9d02e685       kube-scheduler-multinode-602258
	fd3c8ef53e886       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   e574e4d95b35c       kube-apiserver-multinode-602258
	7f75287ad92d6       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   72e988dc049fa       kube-controller-manager-multinode-602258
	f87e33e730abd       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   37d9e64592cd7       busybox-fc5497c4f-kqrzf
	2322d2050e818       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   ee8980b3e7f2b       coredns-7db6d8ff4d-b7fmn
	7416acdd88a7d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   444cd6c8b4e01       storage-provisioner
	d7615041ffc1a       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    10 minutes ago      Exited              kindnet-cni               0                   fd87ad0c3835a       kindnet-68dnv
	864297549b127       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   df1d9d917d4d2       kube-proxy-shhsx
	e82eb1db29cc5       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      10 minutes ago      Exited              kube-scheduler            0                   fd621ebddac51       kube-scheduler-multinode-602258
	07fee3a17c566       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      10 minutes ago      Exited              kube-apiserver            0                   d573150cedc56       kube-apiserver-multinode-602258
	6e7844975c296       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   b63a4e8c712bf       etcd-multinode-602258
	1f624d4b42189       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      10 minutes ago      Exited              kube-controller-manager   0                   da889349b706f       kube-controller-manager-multinode-602258
	
	
	==> coredns [2322d2050e81876dd49cf2144c141ed8528c2c549e7dd61e0c48e22f29649330] <==
	[INFO] 10.244.1.2:35429 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001692093s
	[INFO] 10.244.1.2:37285 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00009108s
	[INFO] 10.244.1.2:56622 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127112s
	[INFO] 10.244.1.2:45288 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001042633s
	[INFO] 10.244.1.2:52803 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000059381s
	[INFO] 10.244.1.2:54071 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130426s
	[INFO] 10.244.1.2:39702 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00006405s
	[INFO] 10.244.0.3:50417 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077697s
	[INFO] 10.244.0.3:50628 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088326s
	[INFO] 10.244.0.3:58676 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000035996s
	[INFO] 10.244.0.3:52001 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000030324s
	[INFO] 10.244.1.2:56675 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123978s
	[INFO] 10.244.1.2:43659 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119779s
	[INFO] 10.244.1.2:52711 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075917s
	[INFO] 10.244.1.2:45351 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067272s
	[INFO] 10.244.0.3:52683 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076616s
	[INFO] 10.244.0.3:38420 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000082376s
	[INFO] 10.244.0.3:44768 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000066627s
	[INFO] 10.244.0.3:33241 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000051007s
	[INFO] 10.244.1.2:57945 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000257298s
	[INFO] 10.244.1.2:50244 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108704s
	[INFO] 10.244.1.2:44884 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082904s
	[INFO] 10.244.1.2:44311 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000088912s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9a7ed335c280800a15790e3d10b98a97080fe5a90197bbc732c626cfdd89f67a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45425 - 15121 "HINFO IN 1518611092228989175.7819485505034679445. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025883017s
	
	
	==> describe nodes <==
	Name:               multinode-602258
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-602258
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=multinode-602258
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T17_45_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:45:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-602258
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:55:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:52:01 +0000   Mon, 29 Jul 2024 17:45:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:52:01 +0000   Mon, 29 Jul 2024 17:45:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:52:01 +0000   Mon, 29 Jul 2024 17:45:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:52:01 +0000   Mon, 29 Jul 2024 17:45:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.218
	  Hostname:    multinode-602258
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 012498f1d60e4288b1f7a7707dd783e7
	  System UUID:                012498f1-d60e-4288-b1f7-a7707dd783e7
	  Boot ID:                    a03477c3-feed-4e08-9160-365794e87044
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kqrzf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m25s
	  kube-system                 coredns-7db6d8ff4d-b7fmn                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-602258                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-68dnv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-602258             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-602258    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-shhsx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-602258             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m                   kube-proxy       
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-602258 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-602258 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-602258 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-602258 event: Registered Node multinode-602258 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-602258 status is now: NodeReady
	  Normal  Starting                 4m6s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m6s (x8 over 4m6s)  kubelet          Node multinode-602258 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m6s (x8 over 4m6s)  kubelet          Node multinode-602258 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m6s (x7 over 4m6s)  kubelet          Node multinode-602258 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m49s                node-controller  Node multinode-602258 event: Registered Node multinode-602258 in Controller
	
	
	Name:               multinode-602258-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-602258-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=multinode-602258
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T17_52_42_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:52:42 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-602258-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:53:43 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 17:53:12 +0000   Mon, 29 Jul 2024 17:54:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 17:53:12 +0000   Mon, 29 Jul 2024 17:54:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 17:53:12 +0000   Mon, 29 Jul 2024 17:54:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 17:53:12 +0000   Mon, 29 Jul 2024 17:54:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.107
	  Hostname:    multinode-602258-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9519401b26cd460a900a42ed1c507ef4
	  System UUID:                9519401b-26cd-460a-900a-42ed1c507ef4
	  Boot ID:                    c6e79f73-c70a-4d9c-b975-69ed661d4cf1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-v7xwc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 kindnet-cb54x              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m47s
	  kube-system                 kube-proxy-vknqb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m40s                  kube-proxy       
	  Normal  Starting                 3m17s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    9m47s (x2 over 9m47s)  kubelet          Node multinode-602258-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m47s (x2 over 9m47s)  kubelet          Node multinode-602258-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m47s (x2 over 9m47s)  kubelet          Node multinode-602258-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 9m47s                  kubelet          Starting kubelet.
	  Normal  NodeReady                9m28s                  kubelet          Node multinode-602258-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m22s (x2 over 3m22s)  kubelet          Node multinode-602258-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m22s (x2 over 3m22s)  kubelet          Node multinode-602258-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m22s (x2 over 3m22s)  kubelet          Node multinode-602258-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m4s                   kubelet          Node multinode-602258-m02 status is now: NodeReady
	  Normal  NodeNotReady             100s                   node-controller  Node multinode-602258-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.057526] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065920] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.180315] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.146008] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.277522] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +4.069706] systemd-fstab-generator[758]: Ignoring "noauto" option for root device
	[  +3.920201] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.061457] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.502183] systemd-fstab-generator[1280]: Ignoring "noauto" option for root device
	[  +0.075359] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.454310] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.127586] systemd-fstab-generator[1481]: Ignoring "noauto" option for root device
	[ +13.936945] kauditd_printk_skb: 60 callbacks suppressed
	[Jul29 17:46] kauditd_printk_skb: 12 callbacks suppressed
	[Jul29 17:51] systemd-fstab-generator[2802]: Ignoring "noauto" option for root device
	[  +0.138219] systemd-fstab-generator[2814]: Ignoring "noauto" option for root device
	[  +0.177249] systemd-fstab-generator[2828]: Ignoring "noauto" option for root device
	[  +0.158200] systemd-fstab-generator[2840]: Ignoring "noauto" option for root device
	[  +0.286282] systemd-fstab-generator[2868]: Ignoring "noauto" option for root device
	[  +0.729519] systemd-fstab-generator[2967]: Ignoring "noauto" option for root device
	[  +2.077819] systemd-fstab-generator[3090]: Ignoring "noauto" option for root device
	[Jul29 17:52] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.963818] kauditd_printk_skb: 32 callbacks suppressed
	[  +2.964547] systemd-fstab-generator[3924]: Ignoring "noauto" option for root device
	[ +18.739675] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [6e7844975c2969533b47b744772eb44171fe78572163dc172999a28d44fcf4ee] <==
	{"level":"info","ts":"2024-07-29T17:45:11.765285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e5f6aca4c72f5b22 elected leader e5f6aca4c72f5b22 at term 2"}
	{"level":"info","ts":"2024-07-29T17:45:11.769674Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T17:45:11.774625Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e5f6aca4c72f5b22","local-member-attributes":"{Name:multinode-602258 ClientURLs:[https://192.168.39.218:2379]}","request-path":"/0/members/e5f6aca4c72f5b22/attributes","cluster-id":"2483a61a4a74c1c4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T17:45:11.774836Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2483a61a4a74c1c4","local-member-id":"e5f6aca4c72f5b22","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T17:45:11.775053Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T17:45:11.77509Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T17:45:11.775152Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T17:45:11.775744Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T17:45:11.780341Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T17:45:11.780392Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T17:45:11.786821Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.218:2379"}
	{"level":"info","ts":"2024-07-29T17:45:11.789766Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T17:46:17.587416Z","caller":"traceutil/trace.go:171","msg":"trace[1313146405] transaction","detail":"{read_only:false; response_revision:447; number_of_response:1; }","duration":"142.700051ms","start":"2024-07-29T17:46:17.444686Z","end":"2024-07-29T17:46:17.587386Z","steps":["trace[1313146405] 'process raft request'  (duration: 134.756885ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T17:47:12.021342Z","caller":"traceutil/trace.go:171","msg":"trace[281310603] transaction","detail":"{read_only:false; response_revision:583; number_of_response:1; }","duration":"164.064708ms","start":"2024-07-29T17:47:11.857143Z","end":"2024-07-29T17:47:12.021208Z","steps":["trace[281310603] 'process raft request'  (duration: 101.977312ms)","trace[281310603] 'compare'  (duration: 61.990741ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T17:47:12.021416Z","caller":"traceutil/trace.go:171","msg":"trace[1350048901] transaction","detail":"{read_only:false; response_revision:584; number_of_response:1; }","duration":"164.12933ms","start":"2024-07-29T17:47:11.857266Z","end":"2024-07-29T17:47:12.021396Z","steps":["trace[1350048901] 'process raft request'  (duration: 163.921031ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T17:50:22.075852Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T17:50:22.075972Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-602258","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.218:2380"],"advertise-client-urls":["https://192.168.39.218:2379"]}
	{"level":"warn","ts":"2024-07-29T17:50:22.076057Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T17:50:22.076133Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T17:50:22.163937Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.218:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T17:50:22.16399Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.218:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T17:50:22.164053Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e5f6aca4c72f5b22","current-leader-member-id":"e5f6aca4c72f5b22"}
	{"level":"info","ts":"2024-07-29T17:50:22.16658Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.218:2380"}
	{"level":"info","ts":"2024-07-29T17:50:22.166762Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.218:2380"}
	{"level":"info","ts":"2024-07-29T17:50:22.1668Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-602258","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.218:2380"],"advertise-client-urls":["https://192.168.39.218:2379"]}
	
	
	==> etcd [b655e5789fb522e6dae942b61ff483971a837d40607e3530bc9c1ae524e627e1] <==
	{"level":"info","ts":"2024-07-29T17:51:58.324975Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T17:51:58.326084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5f6aca4c72f5b22 switched to configuration voters=(16570621702672702242)"}
	{"level":"info","ts":"2024-07-29T17:51:58.326253Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2483a61a4a74c1c4","local-member-id":"e5f6aca4c72f5b22","added-peer-id":"e5f6aca4c72f5b22","added-peer-peer-urls":["https://192.168.39.218:2380"]}
	{"level":"info","ts":"2024-07-29T17:51:58.326493Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2483a61a4a74c1c4","local-member-id":"e5f6aca4c72f5b22","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T17:51:58.326569Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T17:51:58.330481Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T17:51:58.33073Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e5f6aca4c72f5b22","initial-advertise-peer-urls":["https://192.168.39.218:2380"],"listen-peer-urls":["https://192.168.39.218:2380"],"advertise-client-urls":["https://192.168.39.218:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.218:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T17:51:58.330845Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T17:51:58.330951Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.218:2380"}
	{"level":"info","ts":"2024-07-29T17:51:58.330979Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.218:2380"}
	{"level":"info","ts":"2024-07-29T17:52:00.207175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5f6aca4c72f5b22 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T17:52:00.207281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5f6aca4c72f5b22 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T17:52:00.207335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5f6aca4c72f5b22 received MsgPreVoteResp from e5f6aca4c72f5b22 at term 2"}
	{"level":"info","ts":"2024-07-29T17:52:00.207347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5f6aca4c72f5b22 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T17:52:00.207352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5f6aca4c72f5b22 received MsgVoteResp from e5f6aca4c72f5b22 at term 3"}
	{"level":"info","ts":"2024-07-29T17:52:00.20736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5f6aca4c72f5b22 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T17:52:00.20737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e5f6aca4c72f5b22 elected leader e5f6aca4c72f5b22 at term 3"}
	{"level":"info","ts":"2024-07-29T17:52:00.212104Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e5f6aca4c72f5b22","local-member-attributes":"{Name:multinode-602258 ClientURLs:[https://192.168.39.218:2379]}","request-path":"/0/members/e5f6aca4c72f5b22/attributes","cluster-id":"2483a61a4a74c1c4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T17:52:00.212351Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T17:52:00.212114Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T17:52:00.212845Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T17:52:00.213609Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T17:52:00.214639Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T17:52:00.215096Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.218:2379"}
	{"level":"info","ts":"2024-07-29T17:53:23.610382Z","caller":"traceutil/trace.go:171","msg":"trace[1060692683] transaction","detail":"{read_only:false; response_revision:1125; number_of_response:1; }","duration":"163.159234ms","start":"2024-07-29T17:53:23.447191Z","end":"2024-07-29T17:53:23.61035Z","steps":["trace[1060692683] 'process raft request'  (duration: 162.783226ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:56:04 up 11 min,  0 users,  load average: 0.07, 0.27, 0.17
	Linux multinode-602258 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3426a69c32cd243e491fea11995a1bf631263c68a605d31e6a21b97ce4d0ac4b] <==
	I0729 17:54:54.269923       1 main.go:322] Node multinode-602258-m02 has CIDR [10.244.1.0/24] 
	I0729 17:55:04.269500       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0729 17:55:04.269665       1 main.go:299] handling current node
	I0729 17:55:04.269698       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0729 17:55:04.269719       1 main.go:322] Node multinode-602258-m02 has CIDR [10.244.1.0/24] 
	I0729 17:55:14.271378       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0729 17:55:14.271472       1 main.go:322] Node multinode-602258-m02 has CIDR [10.244.1.0/24] 
	I0729 17:55:14.271637       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0729 17:55:14.271663       1 main.go:299] handling current node
	I0729 17:55:24.273100       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0729 17:55:24.273190       1 main.go:299] handling current node
	I0729 17:55:24.273214       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0729 17:55:24.273257       1 main.go:322] Node multinode-602258-m02 has CIDR [10.244.1.0/24] 
	I0729 17:55:34.270316       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0729 17:55:34.270343       1 main.go:299] handling current node
	I0729 17:55:34.270358       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0729 17:55:34.270363       1 main.go:322] Node multinode-602258-m02 has CIDR [10.244.1.0/24] 
	I0729 17:55:44.278379       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0729 17:55:44.278441       1 main.go:299] handling current node
	I0729 17:55:44.278465       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0729 17:55:44.278473       1 main.go:322] Node multinode-602258-m02 has CIDR [10.244.1.0/24] 
	I0729 17:55:54.278785       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0729 17:55:54.278842       1 main.go:322] Node multinode-602258-m02 has CIDR [10.244.1.0/24] 
	I0729 17:55:54.278992       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0729 17:55:54.278999       1 main.go:299] handling current node
	
	
	==> kindnet [d7615041ffc1aef8098e347b0f7f11240291b7814bf03e59b1e336bc7d7bb7c0] <==
	I0729 17:49:34.280581       1 main.go:322] Node multinode-602258-m03 has CIDR [10.244.3.0/24] 
	I0729 17:49:44.283978       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0729 17:49:44.284086       1 main.go:299] handling current node
	I0729 17:49:44.284116       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0729 17:49:44.284134       1 main.go:322] Node multinode-602258-m02 has CIDR [10.244.1.0/24] 
	I0729 17:49:44.284356       1 main.go:295] Handling node with IPs: map[192.168.39.21:{}]
	I0729 17:49:44.284390       1 main.go:322] Node multinode-602258-m03 has CIDR [10.244.3.0/24] 
	I0729 17:49:54.288459       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0729 17:49:54.288518       1 main.go:299] handling current node
	I0729 17:49:54.288540       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0729 17:49:54.288545       1 main.go:322] Node multinode-602258-m02 has CIDR [10.244.1.0/24] 
	I0729 17:49:54.288684       1 main.go:295] Handling node with IPs: map[192.168.39.21:{}]
	I0729 17:49:54.288690       1 main.go:322] Node multinode-602258-m03 has CIDR [10.244.3.0/24] 
	I0729 17:50:04.286785       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0729 17:50:04.286964       1 main.go:299] handling current node
	I0729 17:50:04.287027       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0729 17:50:04.287050       1 main.go:322] Node multinode-602258-m02 has CIDR [10.244.1.0/24] 
	I0729 17:50:04.287216       1 main.go:295] Handling node with IPs: map[192.168.39.21:{}]
	I0729 17:50:04.287319       1 main.go:322] Node multinode-602258-m03 has CIDR [10.244.3.0/24] 
	I0729 17:50:14.285710       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0729 17:50:14.285861       1 main.go:299] handling current node
	I0729 17:50:14.285900       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0729 17:50:14.285919       1 main.go:322] Node multinode-602258-m02 has CIDR [10.244.1.0/24] 
	I0729 17:50:14.286071       1 main.go:295] Handling node with IPs: map[192.168.39.21:{}]
	I0729 17:50:14.286092       1 main.go:322] Node multinode-602258-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [07fee3a17c566e898bf4bda366cd3fef0865591a42bdfbf1d81036e598ac14ce] <==
	I0729 17:50:22.066937       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0729 17:50:22.096947       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0729 17:50:22.097679       1 logging.go:59] [core] [Channel #13 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.097833       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.097863       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.097967       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.098000       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.098047       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.098072       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.098117       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.098150       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.098184       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.098211       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.098285       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.098489       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.098597       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.098658       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0729 17:50:22.098771       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0729 17:50:22.098850       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.098909       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.098961       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.099008       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.099074       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.099153       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 17:50:22.100376       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [fd3c8ef53e88607de4cee229954a3b64b2c31ca195b03e2e83e6b390b674f06a] <==
	I0729 17:52:01.472282       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0729 17:52:01.519334       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 17:52:01.519426       1 policy_source.go:224] refreshing policies
	I0729 17:52:01.534813       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 17:52:01.546872       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 17:52:01.552873       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 17:52:01.553837       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 17:52:01.553889       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 17:52:01.553896       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 17:52:01.558176       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 17:52:01.558684       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 17:52:01.558763       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 17:52:01.572668       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 17:52:01.572739       1 aggregator.go:165] initial CRD sync complete...
	I0729 17:52:01.572760       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 17:52:01.572765       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 17:52:01.572771       1 cache.go:39] Caches are synced for autoregister controller
	I0729 17:52:02.446785       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 17:52:03.906574       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 17:52:04.053594       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 17:52:04.072003       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 17:52:04.167693       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 17:52:04.177264       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 17:52:14.731964       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 17:52:14.760362       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [1f624d4b42189dfe667a9aad521e37765c1b61fa5a1300f05f7a937db2c6a6fa] <==
	I0729 17:46:17.590629       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-602258-m02\" does not exist"
	I0729 17:46:17.671217       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-602258-m02" podCIDRs=["10.244.1.0/24"]
	I0729 17:46:19.592766       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-602258-m02"
	I0729 17:46:36.043726       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-602258-m02"
	I0729 17:46:38.262402       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.355425ms"
	I0729 17:46:38.277775       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.310855ms"
	I0729 17:46:38.277933       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.063µs"
	I0729 17:46:38.281762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.371µs"
	I0729 17:46:39.744791       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.769785ms"
	I0729 17:46:39.744869       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.218µs"
	I0729 17:46:39.850428       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.63819ms"
	I0729 17:46:39.850502       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.596µs"
	I0729 17:47:12.025508       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-602258-m03\" does not exist"
	I0729 17:47:12.025754       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-602258-m02"
	I0729 17:47:12.038996       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-602258-m03" podCIDRs=["10.244.2.0/24"]
	I0729 17:47:14.627810       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-602258-m03"
	I0729 17:47:29.360621       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-602258-m02"
	I0729 17:47:57.833943       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-602258-m02"
	I0729 17:47:59.193170       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-602258-m02"
	I0729 17:47:59.195495       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-602258-m03\" does not exist"
	I0729 17:47:59.204572       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-602258-m03" podCIDRs=["10.244.3.0/24"]
	I0729 17:48:16.684716       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-602258-m02"
	I0729 17:49:04.690657       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-602258-m03"
	I0729 17:49:04.743495       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.131727ms"
	I0729 17:49:04.743733       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.573µs"
	
	
	==> kube-controller-manager [7f75287ad92d6d5f2e5b7da85de7858362a02e33879b2c77184aafed885e2e0d] <==
	I0729 17:52:42.201771       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-602258-m02\" does not exist"
	I0729 17:52:42.211553       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-602258-m02" podCIDRs=["10.244.1.0/24"]
	I0729 17:52:44.084395       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.681µs"
	I0729 17:52:44.095065       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.634µs"
	I0729 17:52:44.107997       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.261µs"
	I0729 17:52:44.140771       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.746µs"
	I0729 17:52:44.148467       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.742µs"
	I0729 17:52:44.153124       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.031µs"
	I0729 17:53:00.089400       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-602258-m02"
	I0729 17:53:00.109945       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.021µs"
	I0729 17:53:00.130031       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.957µs"
	I0729 17:53:02.501625       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.765055ms"
	I0729 17:53:02.502363       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.13µs"
	I0729 17:53:18.312089       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-602258-m02"
	I0729 17:53:19.369190       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-602258-m03\" does not exist"
	I0729 17:53:19.369301       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-602258-m02"
	I0729 17:53:19.379564       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-602258-m03" podCIDRs=["10.244.2.0/24"]
	I0729 17:53:37.103664       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-602258-m03"
	I0729 17:53:42.373921       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-602258-m02"
	I0729 17:54:24.857793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.414537ms"
	I0729 17:54:24.861030       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.281µs"
	I0729 17:54:34.680338       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-jw9gn"
	I0729 17:54:34.705820       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-jw9gn"
	I0729 17:54:34.705929       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-5txpb"
	I0729 17:54:34.731307       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-5txpb"
	
	
	==> kube-proxy [864297549b1272800bfebdd28175a349b3ee8ef7c7bbd78c771eaad9e02b25cc] <==
	I0729 17:45:31.626181       1 server_linux.go:69] "Using iptables proxy"
	I0729 17:45:31.692323       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.218"]
	I0729 17:45:31.757481       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 17:45:31.757522       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 17:45:31.757537       1 server_linux.go:165] "Using iptables Proxier"
	I0729 17:45:31.760208       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 17:45:31.760523       1 server.go:872] "Version info" version="v1.30.3"
	I0729 17:45:31.760536       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 17:45:31.762523       1 config.go:192] "Starting service config controller"
	I0729 17:45:31.762696       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 17:45:31.762726       1 config.go:101] "Starting endpoint slice config controller"
	I0729 17:45:31.762731       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 17:45:31.763651       1 config.go:319] "Starting node config controller"
	I0729 17:45:31.763659       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 17:45:31.864181       1 shared_informer.go:320] Caches are synced for node config
	I0729 17:45:31.864212       1 shared_informer.go:320] Caches are synced for service config
	I0729 17:45:31.864306       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [bf1a3774da5a1b300885fa869b6ee486244da9331c7388294b88d4cc568c1065] <==
	I0729 17:52:03.469289       1 server_linux.go:69] "Using iptables proxy"
	I0729 17:52:03.516071       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.218"]
	I0729 17:52:03.658433       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 17:52:03.658497       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 17:52:03.658515       1 server_linux.go:165] "Using iptables Proxier"
	I0729 17:52:03.665387       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 17:52:03.667578       1 server.go:872] "Version info" version="v1.30.3"
	I0729 17:52:03.667840       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 17:52:03.669393       1 config.go:192] "Starting service config controller"
	I0729 17:52:03.675359       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 17:52:03.670320       1 config.go:101] "Starting endpoint slice config controller"
	I0729 17:52:03.675487       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 17:52:03.670961       1 config.go:319] "Starting node config controller"
	I0729 17:52:03.675497       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 17:52:03.779653       1 shared_informer.go:320] Caches are synced for node config
	I0729 17:52:03.779685       1 shared_informer.go:320] Caches are synced for service config
	I0729 17:52:03.779726       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [619d14875058f9bdafffdb8f819f0dbada1c276a2b0c1a22286f2a986be363bf] <==
	I0729 17:51:59.296949       1 serving.go:380] Generated self-signed cert in-memory
	W0729 17:52:01.489846       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 17:52:01.490323       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 17:52:01.490379       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 17:52:01.490404       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 17:52:01.535513       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 17:52:01.536072       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 17:52:01.543056       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 17:52:01.543322       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 17:52:01.545531       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 17:52:01.543464       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 17:52:01.646190       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e82eb1db29cc5f5d61344dd7ed6985093a7f202f4cdac7abf12ba859cab24ac6] <==
	W0729 17:45:14.022069       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 17:45:14.022120       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 17:45:14.022184       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 17:45:14.022215       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 17:45:14.899560       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 17:45:14.899649       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 17:45:14.915272       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 17:45:14.915313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 17:45:15.005943       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 17:45:15.006574       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 17:45:15.093493       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 17:45:15.093627       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 17:45:15.136203       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 17:45:15.136368       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 17:45:15.186504       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 17:45:15.186680       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 17:45:15.273476       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 17:45:15.273555       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 17:45:15.342447       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 17:45:15.342494       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0729 17:45:17.515381       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 17:50:22.071352       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0729 17:50:22.071510       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0729 17:50:22.071949       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0729 17:50:22.081281       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 29 17:52:02 multinode-602258 kubelet[3097]: I0729 17:52:02.321813    3097 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/700c5f4f-8bac-4a69-8174-0b8a80c4e831-cni-cfg\") pod \"kindnet-68dnv\" (UID: \"700c5f4f-8bac-4a69-8174-0b8a80c4e831\") " pod="kube-system/kindnet-68dnv"
	Jul 29 17:52:02 multinode-602258 kubelet[3097]: I0729 17:52:02.321847    3097 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/700c5f4f-8bac-4a69-8174-0b8a80c4e831-xtables-lock\") pod \"kindnet-68dnv\" (UID: \"700c5f4f-8bac-4a69-8174-0b8a80c4e831\") " pod="kube-system/kindnet-68dnv"
	Jul 29 17:52:02 multinode-602258 kubelet[3097]: I0729 17:52:02.321928    3097 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8951fee7-e31c-401a-8688-79487ea5fc64-xtables-lock\") pod \"kube-proxy-shhsx\" (UID: \"8951fee7-e31c-401a-8688-79487ea5fc64\") " pod="kube-system/kube-proxy-shhsx"
	Jul 29 17:52:02 multinode-602258 kubelet[3097]: I0729 17:52:02.322019    3097 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8951fee7-e31c-401a-8688-79487ea5fc64-lib-modules\") pod \"kube-proxy-shhsx\" (UID: \"8951fee7-e31c-401a-8688-79487ea5fc64\") " pod="kube-system/kube-proxy-shhsx"
	Jul 29 17:52:04 multinode-602258 kubelet[3097]: I0729 17:52:04.629552    3097 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 29 17:52:57 multinode-602258 kubelet[3097]: E0729 17:52:57.307914    3097 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:52:57 multinode-602258 kubelet[3097]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:52:57 multinode-602258 kubelet[3097]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:52:57 multinode-602258 kubelet[3097]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:52:57 multinode-602258 kubelet[3097]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:53:57 multinode-602258 kubelet[3097]: E0729 17:53:57.308424    3097 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:53:57 multinode-602258 kubelet[3097]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:53:57 multinode-602258 kubelet[3097]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:53:57 multinode-602258 kubelet[3097]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:53:57 multinode-602258 kubelet[3097]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:54:57 multinode-602258 kubelet[3097]: E0729 17:54:57.307708    3097 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:54:57 multinode-602258 kubelet[3097]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:54:57 multinode-602258 kubelet[3097]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:54:57 multinode-602258 kubelet[3097]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:54:57 multinode-602258 kubelet[3097]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:55:57 multinode-602258 kubelet[3097]: E0729 17:55:57.308326    3097 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:55:57 multinode-602258 kubelet[3097]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:55:57 multinode-602258 kubelet[3097]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:55:57 multinode-602258 kubelet[3097]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:55:57 multinode-602258 kubelet[3097]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 17:56:03.264159   50128 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19345-11206/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
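Note on the stderr block above: the "bufio.Scanner: token too long" message comes from Go's bufio.Scanner, whose default maximum token size is 64 KiB, so a single line in lastStart.txt longer than that stops scanning with ErrTooLong. Below is a minimal, self-contained sketch (not minikube's actual logs.go code, just an illustration under that assumption) of the limit and the usual workaround of enlarging the scanner buffer:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	// A single "line" longer than bufio.MaxScanTokenSize (64 KiB) reproduces the error.
	long := strings.Repeat("x", bufio.MaxScanTokenSize+1)

	s := bufio.NewScanner(strings.NewReader(long))
	for s.Scan() {
	}
	fmt.Println(s.Err()) // prints: bufio.Scanner: token too long

	// Workaround: allow larger tokens (here up to 1 MiB) via Scanner.Buffer.
	s = bufio.NewScanner(strings.NewReader(long))
	s.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for s.Scan() {
	}
	fmt.Println(s.Err()) // prints: <nil>; the oversized line now scans cleanly
}

Running the sketch prints the same "token too long" error first, then <nil> once the maximum token size is raised.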
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-602258 -n multinode-602258
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-602258 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.37s)

                                                
                                    
x
+
TestPreload (269.17s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-877432 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0729 18:01:52.902395   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-877432 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m8.148654179s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-877432 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-877432 image pull gcr.io/k8s-minikube/busybox: (1.068545594s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-877432
E0729 18:03:29.676951   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-877432: exit status 82 (2m0.445875071s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-877432"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-877432 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-07-29 18:04:04.808315872 +0000 UTC m=+4101.475684163
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-877432 -n test-preload-877432
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-877432 -n test-preload-877432: exit status 3 (18.605546296s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:04:23.410716   53053 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	E0729 18:04:23.410735   53053 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-877432" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-877432" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-877432
--- FAIL: TestPreload (269.17s)
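For context on how the "(dbg) Run" / "Non-zero exit" lines throughout this report are produced: the test harness shells out to the minikube binary and fails the test when the exit code is unexpected (here exit status 82, GUEST_STOP_TIMEOUT, from the stop step). The following is a minimal sketch of that pattern using only the Go standard library; the helper name runCmd and the hard-coded arguments are illustrative, not the actual preload_test.go or helpers_test.go code:

package main

import (
	"fmt"
	"os/exec"
)

// runCmd runs a command and returns its combined output plus the exit code.
// A returned error means the command could not be started at all.
func runCmd(name string, args ...string) (string, int, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		if exitErr, ok := err.(*exec.ExitError); ok {
			return string(out), exitErr.ExitCode(), nil // command ran but exited non-zero
		}
		return string(out), -1, err // e.g. binary not found
	}
	return string(out), 0, nil
}

func main() {
	// Illustrative invocation mirroring the stop step above (path and profile are examples).
	out, code, err := runCmd("out/minikube-linux-amd64", "stop", "-p", "test-preload-877432")
	if err != nil {
		fmt.Println("could not run command:", err)
		return
	}
	if code != 0 {
		// In the failure above this would be 82 (GUEST_STOP_TIMEOUT).
		fmt.Printf("non-zero exit: %d\n%s", code, out)
	}
}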

                                                
                                    
x
+
TestKubernetesUpgrade (404s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-372591 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-372591 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m29.00084848s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-372591] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19345
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-372591" primary control-plane node in "kubernetes-upgrade-372591" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 18:06:20.386109   54143 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:06:20.386403   54143 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:06:20.386415   54143 out.go:304] Setting ErrFile to fd 2...
	I0729 18:06:20.386421   54143 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:06:20.386610   54143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 18:06:20.387175   54143 out.go:298] Setting JSON to false
	I0729 18:06:20.388081   54143 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6532,"bootTime":1722269848,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:06:20.388142   54143 start.go:139] virtualization: kvm guest
	I0729 18:06:20.391226   54143 out.go:177] * [kubernetes-upgrade-372591] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:06:20.392685   54143 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 18:06:20.392722   54143 notify.go:220] Checking for updates...
	I0729 18:06:20.395111   54143 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:06:20.396581   54143 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:06:20.398668   54143 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 18:06:20.399694   54143 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:06:20.400811   54143 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:06:20.401989   54143 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:06:20.438483   54143 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 18:06:20.439708   54143 start.go:297] selected driver: kvm2
	I0729 18:06:20.439724   54143 start.go:901] validating driver "kvm2" against <nil>
	I0729 18:06:20.439738   54143 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:06:20.440534   54143 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:06:20.440642   54143 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19345-11206/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:06:20.464924   54143 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:06:20.464977   54143 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 18:06:20.465246   54143 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 18:06:20.465279   54143 cni.go:84] Creating CNI manager for ""
	I0729 18:06:20.465288   54143 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:06:20.465300   54143 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 18:06:20.465374   54143 start.go:340] cluster config:
	{Name:kubernetes-upgrade-372591 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-372591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:06:20.465493   54143 iso.go:125] acquiring lock: {Name:mke302f851ce8256f9b44dd080ed38df68285cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:06:20.467478   54143 out.go:177] * Starting "kubernetes-upgrade-372591" primary control-plane node in "kubernetes-upgrade-372591" cluster
	I0729 18:06:20.468733   54143 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:06:20.468767   54143 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 18:06:20.468780   54143 cache.go:56] Caching tarball of preloaded images
	I0729 18:06:20.468856   54143 preload.go:172] Found /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:06:20.468869   54143 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 18:06:20.469229   54143 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/config.json ...
	I0729 18:06:20.469258   54143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/config.json: {Name:mkdf5bec355c170b05081c7d0031303cca385db9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:06:20.469397   54143 start.go:360] acquireMachinesLock for kubernetes-upgrade-372591: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:06:20.469433   54143 start.go:364] duration metric: took 17.202µs to acquireMachinesLock for "kubernetes-upgrade-372591"
	I0729 18:06:20.469452   54143 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-372591 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-372591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:06:20.469500   54143 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 18:06:20.471784   54143 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 18:06:20.471951   54143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:06:20.471985   54143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:06:20.494924   54143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36703
	I0729 18:06:20.495333   54143 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:06:20.495913   54143 main.go:141] libmachine: Using API Version  1
	I0729 18:06:20.495931   54143 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:06:20.496232   54143 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:06:20.496436   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetMachineName
	I0729 18:06:20.496736   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .DriverName
	I0729 18:06:20.496885   54143 start.go:159] libmachine.API.Create for "kubernetes-upgrade-372591" (driver="kvm2")
	I0729 18:06:20.496912   54143 client.go:168] LocalClient.Create starting
	I0729 18:06:20.496934   54143 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem
	I0729 18:06:20.496969   54143 main.go:141] libmachine: Decoding PEM data...
	I0729 18:06:20.496982   54143 main.go:141] libmachine: Parsing certificate...
	I0729 18:06:20.497028   54143 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem
	I0729 18:06:20.497047   54143 main.go:141] libmachine: Decoding PEM data...
	I0729 18:06:20.497064   54143 main.go:141] libmachine: Parsing certificate...
	I0729 18:06:20.497080   54143 main.go:141] libmachine: Running pre-create checks...
	I0729 18:06:20.497088   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .PreCreateCheck
	I0729 18:06:20.497434   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetConfigRaw
	I0729 18:06:20.497843   54143 main.go:141] libmachine: Creating machine...
	I0729 18:06:20.497859   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .Create
	I0729 18:06:20.497997   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Creating KVM machine...
	I0729 18:06:20.499250   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found existing default KVM network
	I0729 18:06:20.499865   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | I0729 18:06:20.499731   54207 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000014720}
	I0729 18:06:20.499900   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | created network xml: 
	I0729 18:06:20.499922   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | <network>
	I0729 18:06:20.499937   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG |   <name>mk-kubernetes-upgrade-372591</name>
	I0729 18:06:20.499951   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG |   <dns enable='no'/>
	I0729 18:06:20.499962   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG |   
	I0729 18:06:20.499977   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 18:06:20.499988   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG |     <dhcp>
	I0729 18:06:20.499997   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 18:06:20.500008   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG |     </dhcp>
	I0729 18:06:20.500018   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG |   </ip>
	I0729 18:06:20.500028   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG |   
	I0729 18:06:20.500034   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | </network>
	I0729 18:06:20.500045   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | 
	I0729 18:06:20.505311   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | trying to create private KVM network mk-kubernetes-upgrade-372591 192.168.39.0/24...
	I0729 18:06:20.580373   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | private KVM network mk-kubernetes-upgrade-372591 192.168.39.0/24 created
	I0729 18:06:20.580400   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Setting up store path in /home/jenkins/minikube-integration/19345-11206/.minikube/machines/kubernetes-upgrade-372591 ...
	I0729 18:06:20.580412   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | I0729 18:06:20.580361   54207 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 18:06:20.580430   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Building disk image from file:///home/jenkins/minikube-integration/19345-11206/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 18:06:20.580523   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Downloading /home/jenkins/minikube-integration/19345-11206/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19345-11206/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 18:06:20.903433   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | I0729 18:06:20.903317   54207 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/kubernetes-upgrade-372591/id_rsa...
	I0729 18:06:21.145613   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | I0729 18:06:21.145484   54207 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/kubernetes-upgrade-372591/kubernetes-upgrade-372591.rawdisk...
	I0729 18:06:21.145647   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | Writing magic tar header
	I0729 18:06:21.145704   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | Writing SSH key tar header
	I0729 18:06:21.145738   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | I0729 18:06:21.145590   54207 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19345-11206/.minikube/machines/kubernetes-upgrade-372591 ...
	I0729 18:06:21.145758   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube/machines/kubernetes-upgrade-372591 (perms=drwx------)
	I0729 18:06:21.145776   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/kubernetes-upgrade-372591
	I0729 18:06:21.145797   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube/machines
	I0729 18:06:21.145813   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube/machines (perms=drwxr-xr-x)
	I0729 18:06:21.145828   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 18:06:21.145843   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube (perms=drwxr-xr-x)
	I0729 18:06:21.145858   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206
	I0729 18:06:21.145875   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 18:06:21.145888   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | Checking permissions on dir: /home/jenkins
	I0729 18:06:21.145900   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | Checking permissions on dir: /home
	I0729 18:06:21.145912   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | Skipping /home - not owner
	I0729 18:06:21.145927   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206 (perms=drwxrwxr-x)
	I0729 18:06:21.145943   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 18:06:21.145967   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 18:06:21.145994   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Creating domain...
	I0729 18:06:21.147022   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) define libvirt domain using xml: 
	I0729 18:06:21.147043   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) <domain type='kvm'>
	I0729 18:06:21.147054   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)   <name>kubernetes-upgrade-372591</name>
	I0729 18:06:21.147072   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)   <memory unit='MiB'>2200</memory>
	I0729 18:06:21.147085   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)   <vcpu>2</vcpu>
	I0729 18:06:21.147096   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)   <features>
	I0729 18:06:21.147110   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)     <acpi/>
	I0729 18:06:21.147123   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)     <apic/>
	I0729 18:06:21.147213   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)     <pae/>
	I0729 18:06:21.147270   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)     
	I0729 18:06:21.147285   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)   </features>
	I0729 18:06:21.147298   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)   <cpu mode='host-passthrough'>
	I0729 18:06:21.147309   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)   
	I0729 18:06:21.147318   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)   </cpu>
	I0729 18:06:21.147326   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)   <os>
	I0729 18:06:21.147336   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)     <type>hvm</type>
	I0729 18:06:21.147346   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)     <boot dev='cdrom'/>
	I0729 18:06:21.147357   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)     <boot dev='hd'/>
	I0729 18:06:21.147366   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)     <bootmenu enable='no'/>
	I0729 18:06:21.147376   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)   </os>
	I0729 18:06:21.147386   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)   <devices>
	I0729 18:06:21.147400   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)     <disk type='file' device='cdrom'>
	I0729 18:06:21.147425   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)       <source file='/home/jenkins/minikube-integration/19345-11206/.minikube/machines/kubernetes-upgrade-372591/boot2docker.iso'/>
	I0729 18:06:21.147443   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)       <target dev='hdc' bus='scsi'/>
	I0729 18:06:21.147454   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)       <readonly/>
	I0729 18:06:21.147465   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)     </disk>
	I0729 18:06:21.147474   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)     <disk type='file' device='disk'>
	I0729 18:06:21.147494   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 18:06:21.147511   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)       <source file='/home/jenkins/minikube-integration/19345-11206/.minikube/machines/kubernetes-upgrade-372591/kubernetes-upgrade-372591.rawdisk'/>
	I0729 18:06:21.147527   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)       <target dev='hda' bus='virtio'/>
	I0729 18:06:21.147539   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)     </disk>
	I0729 18:06:21.147550   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)     <interface type='network'>
	I0729 18:06:21.147564   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)       <source network='mk-kubernetes-upgrade-372591'/>
	I0729 18:06:21.147573   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)       <model type='virtio'/>
	I0729 18:06:21.147584   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)     </interface>
	I0729 18:06:21.147597   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)     <interface type='network'>
	I0729 18:06:21.147610   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)       <source network='default'/>
	I0729 18:06:21.147621   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)       <model type='virtio'/>
	I0729 18:06:21.147631   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)     </interface>
	I0729 18:06:21.147642   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)     <serial type='pty'>
	I0729 18:06:21.147653   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)       <target port='0'/>
	I0729 18:06:21.147661   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)     </serial>
	I0729 18:06:21.147678   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)     <console type='pty'>
	I0729 18:06:21.147697   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)       <target type='serial' port='0'/>
	I0729 18:06:21.147708   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)     </console>
	I0729 18:06:21.147719   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)     <rng model='virtio'>
	I0729 18:06:21.147729   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)       <backend model='random'>/dev/random</backend>
	I0729 18:06:21.147738   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)     </rng>
	I0729 18:06:21.147743   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)     
	I0729 18:06:21.147751   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)     
	I0729 18:06:21.147772   54143 main.go:141] libmachine: (kubernetes-upgrade-372591)   </devices>
	I0729 18:06:21.147791   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) </domain>
	I0729 18:06:21.147805   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) 
	I0729 18:06:21.152256   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:41:85:22 in network default
	I0729 18:06:21.152734   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Ensuring networks are active...
	I0729 18:06:21.152768   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:21.153367   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Ensuring network default is active
	I0729 18:06:21.153632   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Ensuring network mk-kubernetes-upgrade-372591 is active
	I0729 18:06:21.154090   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Getting domain xml...
	I0729 18:06:21.154728   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Creating domain...
	I0729 18:06:22.688889   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Waiting to get IP...
	I0729 18:06:22.689838   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:22.690235   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | unable to find current IP address of domain kubernetes-upgrade-372591 in network mk-kubernetes-upgrade-372591
	I0729 18:06:22.690258   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | I0729 18:06:22.690218   54207 retry.go:31] will retry after 196.048251ms: waiting for machine to come up
	I0729 18:06:22.887641   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:22.888070   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | unable to find current IP address of domain kubernetes-upgrade-372591 in network mk-kubernetes-upgrade-372591
	I0729 18:06:22.888094   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | I0729 18:06:22.888038   54207 retry.go:31] will retry after 342.705545ms: waiting for machine to come up
	I0729 18:06:23.232734   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:23.233238   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | unable to find current IP address of domain kubernetes-upgrade-372591 in network mk-kubernetes-upgrade-372591
	I0729 18:06:23.233265   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | I0729 18:06:23.233199   54207 retry.go:31] will retry after 299.160921ms: waiting for machine to come up
	I0729 18:06:23.533775   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:23.534257   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | unable to find current IP address of domain kubernetes-upgrade-372591 in network mk-kubernetes-upgrade-372591
	I0729 18:06:23.534301   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | I0729 18:06:23.534207   54207 retry.go:31] will retry after 446.719244ms: waiting for machine to come up
	I0729 18:06:23.982937   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:23.983405   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | unable to find current IP address of domain kubernetes-upgrade-372591 in network mk-kubernetes-upgrade-372591
	I0729 18:06:23.983433   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | I0729 18:06:23.983357   54207 retry.go:31] will retry after 610.942439ms: waiting for machine to come up
	I0729 18:06:24.596724   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:24.597626   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | unable to find current IP address of domain kubernetes-upgrade-372591 in network mk-kubernetes-upgrade-372591
	I0729 18:06:24.597654   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | I0729 18:06:24.597586   54207 retry.go:31] will retry after 732.073997ms: waiting for machine to come up
	I0729 18:06:25.331654   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:25.331970   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | unable to find current IP address of domain kubernetes-upgrade-372591 in network mk-kubernetes-upgrade-372591
	I0729 18:06:25.332015   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | I0729 18:06:25.331910   54207 retry.go:31] will retry after 1.043695148s: waiting for machine to come up
	I0729 18:06:26.376844   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:26.377328   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | unable to find current IP address of domain kubernetes-upgrade-372591 in network mk-kubernetes-upgrade-372591
	I0729 18:06:26.377356   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | I0729 18:06:26.377292   54207 retry.go:31] will retry after 1.031062062s: waiting for machine to come up
	I0729 18:06:27.410410   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:27.410798   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | unable to find current IP address of domain kubernetes-upgrade-372591 in network mk-kubernetes-upgrade-372591
	I0729 18:06:27.410821   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | I0729 18:06:27.410756   54207 retry.go:31] will retry after 1.697919578s: waiting for machine to come up
	I0729 18:06:29.114477   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:29.114512   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | unable to find current IP address of domain kubernetes-upgrade-372591 in network mk-kubernetes-upgrade-372591
	I0729 18:06:29.114534   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | I0729 18:06:29.111135   54207 retry.go:31] will retry after 1.867629736s: waiting for machine to come up
	I0729 18:06:30.980790   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:30.981207   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | unable to find current IP address of domain kubernetes-upgrade-372591 in network mk-kubernetes-upgrade-372591
	I0729 18:06:30.981238   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | I0729 18:06:30.981158   54207 retry.go:31] will retry after 2.342430344s: waiting for machine to come up
	I0729 18:06:33.326660   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:33.327005   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | unable to find current IP address of domain kubernetes-upgrade-372591 in network mk-kubernetes-upgrade-372591
	I0729 18:06:33.327027   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | I0729 18:06:33.326974   54207 retry.go:31] will retry after 2.624262047s: waiting for machine to come up
	I0729 18:06:35.953351   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:35.953835   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | unable to find current IP address of domain kubernetes-upgrade-372591 in network mk-kubernetes-upgrade-372591
	I0729 18:06:35.953854   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | I0729 18:06:35.953786   54207 retry.go:31] will retry after 3.07884368s: waiting for machine to come up
	I0729 18:06:39.035902   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:39.036263   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | unable to find current IP address of domain kubernetes-upgrade-372591 in network mk-kubernetes-upgrade-372591
	I0729 18:06:39.036285   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | I0729 18:06:39.036227   54207 retry.go:31] will retry after 4.262113655s: waiting for machine to come up
	I0729 18:06:43.301332   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:43.301731   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Found IP for machine: 192.168.39.171
	I0729 18:06:43.301748   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Reserving static IP address...
	I0729 18:06:43.301776   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has current primary IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:43.302050   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-372591", mac: "52:54:00:f6:5d:7a", ip: "192.168.39.171"} in network mk-kubernetes-upgrade-372591
	I0729 18:06:43.373590   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Reserved static IP address: 192.168.39.171
	I0729 18:06:43.373617   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | Getting to WaitForSSH function...
	I0729 18:06:43.373626   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Waiting for SSH to be available...
	I0729 18:06:43.375780   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:43.376151   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:06:36 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:06:43.376183   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:43.376378   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | Using SSH client type: external
	I0729 18:06:43.376405   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/kubernetes-upgrade-372591/id_rsa (-rw-------)
	I0729 18:06:43.376452   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.171 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/kubernetes-upgrade-372591/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:06:43.376470   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | About to run SSH command:
	I0729 18:06:43.376487   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | exit 0
	I0729 18:06:43.498314   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | SSH cmd err, output: <nil>: 
	I0729 18:06:43.498550   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) KVM machine creation complete!
	I0729 18:06:43.498849   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetConfigRaw
	I0729 18:06:43.499380   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .DriverName
	I0729 18:06:43.499555   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .DriverName
	I0729 18:06:43.499711   54143 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 18:06:43.499734   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetState
	I0729 18:06:43.501192   54143 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 18:06:43.501207   54143 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 18:06:43.501213   54143 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 18:06:43.501218   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHHostname
	I0729 18:06:43.503332   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:43.503694   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:06:36 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:06:43.503721   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:43.503875   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHPort
	I0729 18:06:43.504055   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:06:43.504168   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:06:43.504277   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHUsername
	I0729 18:06:43.504390   54143 main.go:141] libmachine: Using SSH client type: native
	I0729 18:06:43.504607   54143 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0729 18:06:43.504619   54143 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 18:06:43.605515   54143 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:06:43.605539   54143 main.go:141] libmachine: Detecting the provisioner...
	I0729 18:06:43.605549   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHHostname
	I0729 18:06:43.608337   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:43.608720   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:06:36 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:06:43.608753   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:43.608868   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHPort
	I0729 18:06:43.609079   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:06:43.609243   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:06:43.609400   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHUsername
	I0729 18:06:43.609565   54143 main.go:141] libmachine: Using SSH client type: native
	I0729 18:06:43.609722   54143 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0729 18:06:43.609732   54143 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 18:06:43.714847   54143 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 18:06:43.714914   54143 main.go:141] libmachine: found compatible host: buildroot
	I0729 18:06:43.714923   54143 main.go:141] libmachine: Provisioning with buildroot...
	I0729 18:06:43.714931   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetMachineName
	I0729 18:06:43.715172   54143 buildroot.go:166] provisioning hostname "kubernetes-upgrade-372591"
	I0729 18:06:43.715197   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetMachineName
	I0729 18:06:43.715400   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHHostname
	I0729 18:06:43.718043   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:43.718496   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:06:36 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:06:43.718521   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:43.718668   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHPort
	I0729 18:06:43.718825   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:06:43.718993   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:06:43.719099   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHUsername
	I0729 18:06:43.719241   54143 main.go:141] libmachine: Using SSH client type: native
	I0729 18:06:43.719481   54143 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0729 18:06:43.719496   54143 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-372591 && echo "kubernetes-upgrade-372591" | sudo tee /etc/hostname
	I0729 18:06:43.835719   54143 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-372591
	
	I0729 18:06:43.835749   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHHostname
	I0729 18:06:43.838373   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:43.838721   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:06:36 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:06:43.838751   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:43.838916   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHPort
	I0729 18:06:43.839120   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:06:43.839287   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:06:43.839419   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHUsername
	I0729 18:06:43.839571   54143 main.go:141] libmachine: Using SSH client type: native
	I0729 18:06:43.839800   54143 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0729 18:06:43.839826   54143 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-372591' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-372591/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-372591' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:06:43.951637   54143 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:06:43.951671   54143 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:06:43.951720   54143 buildroot.go:174] setting up certificates
	I0729 18:06:43.951738   54143 provision.go:84] configureAuth start
	I0729 18:06:43.951755   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetMachineName
	I0729 18:06:43.952070   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetIP
	I0729 18:06:43.954632   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:43.954979   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:06:36 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:06:43.955006   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:43.955102   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHHostname
	I0729 18:06:43.957423   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:43.957822   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:06:36 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:06:43.957877   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:43.957949   54143 provision.go:143] copyHostCerts
	I0729 18:06:43.958002   54143 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:06:43.958014   54143 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:06:43.958091   54143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:06:43.958207   54143 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:06:43.958217   54143 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:06:43.958244   54143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:06:43.958316   54143 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:06:43.958323   54143 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:06:43.958343   54143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:06:43.958426   54143 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-372591 san=[127.0.0.1 192.168.39.171 kubernetes-upgrade-372591 localhost minikube]
	I0729 18:06:44.099744   54143 provision.go:177] copyRemoteCerts
	I0729 18:06:44.099803   54143 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:06:44.099830   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHHostname
	I0729 18:06:44.102284   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:44.102583   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:06:36 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:06:44.102612   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:44.102779   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHPort
	I0729 18:06:44.102951   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:06:44.103123   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHUsername
	I0729 18:06:44.103226   54143 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/kubernetes-upgrade-372591/id_rsa Username:docker}
	I0729 18:06:44.184626   54143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:06:44.207785   54143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0729 18:06:44.230188   54143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 18:06:44.252708   54143 provision.go:87] duration metric: took 300.955086ms to configureAuth
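
configureAuth above generates a server certificate whose SANs are the IPs and names in the `generating server cert` line. A minimal, self-contained sketch of that idea with Go's crypto/x509 follows; it is self-signed for brevity (minikube signs with its CA key instead), and the SAN values are copied from the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs taken from the provision.go:117 line above.
	ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.171")}
	dns := []string{"kubernetes-upgrade-372591", "localhost", "minikube"}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-372591"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     dns,
	}
	// Self-signed for the sketch; a real server.pem is issued by the minikube CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
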
	I0729 18:06:44.252737   54143 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:06:44.252874   54143 config.go:182] Loaded profile config "kubernetes-upgrade-372591": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 18:06:44.252957   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHHostname
	I0729 18:06:44.255336   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:44.255613   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:06:36 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:06:44.255641   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:44.255826   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHPort
	I0729 18:06:44.256007   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:06:44.256156   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:06:44.256257   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHUsername
	I0729 18:06:44.256410   54143 main.go:141] libmachine: Using SSH client type: native
	I0729 18:06:44.256614   54143 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0729 18:06:44.256637   54143 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:06:44.523222   54143 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:06:44.523251   54143 main.go:141] libmachine: Checking connection to Docker...
	I0729 18:06:44.523263   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetURL
	I0729 18:06:44.524498   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | Using libvirt version 6000000
	I0729 18:06:44.526682   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:44.527007   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:06:36 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:06:44.527046   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:44.527218   54143 main.go:141] libmachine: Docker is up and running!
	I0729 18:06:44.527234   54143 main.go:141] libmachine: Reticulating splines...
	I0729 18:06:44.527240   54143 client.go:171] duration metric: took 24.030322018s to LocalClient.Create
	I0729 18:06:44.527261   54143 start.go:167] duration metric: took 24.030376637s to libmachine.API.Create "kubernetes-upgrade-372591"
	I0729 18:06:44.527271   54143 start.go:293] postStartSetup for "kubernetes-upgrade-372591" (driver="kvm2")
	I0729 18:06:44.527280   54143 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:06:44.527294   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .DriverName
	I0729 18:06:44.527507   54143 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:06:44.527534   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHHostname
	I0729 18:06:44.529542   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:44.529825   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:06:36 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:06:44.529854   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:44.529990   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHPort
	I0729 18:06:44.530164   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:06:44.530326   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHUsername
	I0729 18:06:44.530481   54143 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/kubernetes-upgrade-372591/id_rsa Username:docker}
	I0729 18:06:44.613867   54143 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:06:44.618022   54143 info.go:137] Remote host: Buildroot 2023.02.9
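
The `new ssh client` lines correspond to opening an SSH session with the machine's generated key and running a command such as `cat /etc/os-release`. A hedged sketch of the same operation with golang.org/x/crypto/ssh; the key path, user and address are taken from the log, and error handling is simplified:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := "/home/jenkins/minikube-integration/19345-11206/.minikube/machines/kubernetes-upgrade-372591/id_rsa"
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this in production
	}
	client, err := ssh.Dial("tcp", "192.168.39.171:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("cat /etc/os-release")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
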
	I0729 18:06:44.618048   54143 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:06:44.618129   54143 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:06:44.618219   54143 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:06:44.618323   54143 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:06:44.627918   54143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:06:44.651532   54143 start.go:296] duration metric: took 124.245317ms for postStartSetup
	I0729 18:06:44.651598   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetConfigRaw
	I0729 18:06:44.652145   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetIP
	I0729 18:06:44.654438   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:44.654732   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:06:36 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:06:44.654762   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:44.654956   54143 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/config.json ...
	I0729 18:06:44.655158   54143 start.go:128] duration metric: took 24.185648421s to createHost
	I0729 18:06:44.655185   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHHostname
	I0729 18:06:44.657826   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:44.658232   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:06:36 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:06:44.658275   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:44.658459   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHPort
	I0729 18:06:44.658625   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:06:44.658755   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:06:44.658885   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHUsername
	I0729 18:06:44.659024   54143 main.go:141] libmachine: Using SSH client type: native
	I0729 18:06:44.659194   54143 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0729 18:06:44.659206   54143 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 18:06:44.763147   54143 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722276404.732213970
	
	I0729 18:06:44.763166   54143 fix.go:216] guest clock: 1722276404.732213970
	I0729 18:06:44.763173   54143 fix.go:229] Guest: 2024-07-29 18:06:44.73221397 +0000 UTC Remote: 2024-07-29 18:06:44.655171302 +0000 UTC m=+24.310058428 (delta=77.042668ms)
	I0729 18:06:44.763214   54143 fix.go:200] guest clock delta is within tolerance: 77.042668ms
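
The clock check above runs `date +%s.%N` on the guest and compares it with the host-side timestamp. A rough sketch of that comparison is below; the guest value is copied from the log, and the 1-second tolerance is an assumption for illustration (the real tolerance is not shown in the log):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1722276404.732213970") // value from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta < time.Second { // assumed tolerance, for illustration only
		fmt.Println("guest clock delta is within tolerance:", delta)
	} else {
		fmt.Println("guest clock delta too large, would resync:", delta)
	}
}
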
	I0729 18:06:44.763221   54143 start.go:83] releasing machines lock for "kubernetes-upgrade-372591", held for 24.29377903s
	I0729 18:06:44.763252   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .DriverName
	I0729 18:06:44.763528   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetIP
	I0729 18:06:44.765907   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:44.766380   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:06:36 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:06:44.766416   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:44.766569   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .DriverName
	I0729 18:06:44.767031   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .DriverName
	I0729 18:06:44.767227   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .DriverName
	I0729 18:06:44.767321   54143 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:06:44.767365   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHHostname
	I0729 18:06:44.767431   54143 ssh_runner.go:195] Run: cat /version.json
	I0729 18:06:44.767458   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHHostname
	I0729 18:06:44.770143   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:44.770300   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:44.770501   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:06:36 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:06:44.770536   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:44.770698   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHPort
	I0729 18:06:44.770740   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:06:36 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:06:44.770768   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:44.770884   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:06:44.770957   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHPort
	I0729 18:06:44.771026   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHUsername
	I0729 18:06:44.771095   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:06:44.771153   54143 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/kubernetes-upgrade-372591/id_rsa Username:docker}
	I0729 18:06:44.771214   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHUsername
	I0729 18:06:44.771341   54143 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/kubernetes-upgrade-372591/id_rsa Username:docker}
	I0729 18:06:44.872215   54143 ssh_runner.go:195] Run: systemctl --version
	I0729 18:06:44.880839   54143 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:06:45.046954   54143 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:06:45.053117   54143 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:06:45.053201   54143 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:06:45.069529   54143 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
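
The `find ... -exec mv {} {}.mk_disabled` step above side-lines any bridge/podman CNI configs so they do not conflict with the CNI minikube is about to configure. A rough local equivalent (the path comes from the log; this is a sketch, not the ssh_runner invocation itself):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// Rename bridge/podman CNI configs so the runtime ignores them, mirroring
// the find/mv command in the log above.
func main() {
	matches, err := filepath.Glob("/etc/cni/net.d/*")
	if err != nil {
		panic(err)
	}
	for _, p := range matches {
		base := filepath.Base(p)
		if strings.HasSuffix(base, ".mk_disabled") {
			continue
		}
		if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
			if err := os.Rename(p, p+".mk_disabled"); err != nil {
				panic(err)
			}
			fmt.Println("disabled", p)
		}
	}
}
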
	I0729 18:06:45.069551   54143 start.go:495] detecting cgroup driver to use...
	I0729 18:06:45.069613   54143 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:06:45.086154   54143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:06:45.101791   54143 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:06:45.101844   54143 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:06:45.117744   54143 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:06:45.131349   54143 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:06:45.262656   54143 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:06:45.423331   54143 docker.go:233] disabling docker service ...
	I0729 18:06:45.423396   54143 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:06:45.438957   54143 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:06:45.452750   54143 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:06:45.596241   54143 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:06:45.737463   54143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:06:45.751285   54143 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:06:45.772028   54143 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 18:06:45.772107   54143 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:06:45.784366   54143 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:06:45.784426   54143 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:06:45.796479   54143 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:06:45.808511   54143 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
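
The sed commands above point CRI-O at the desired pause image and cgroup manager in /etc/crio/crio.conf.d/02-crio.conf. A simplified in-process equivalent for the first two edits (path and values taken from the log; this sketch does not reproduce the conmon_cgroup insertion):

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Same substitutions as the sed invocations in the log.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		panic(err)
	}
}
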
	I0729 18:06:45.820586   54143 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:06:45.831335   54143 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:06:45.842372   54143 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:06:45.842431   54143 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:06:45.859370   54143 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
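
Netfilter setup above is a modprobe of br_netfilter plus echoing "1" into /proc. The same two toggles, sketched as direct file writes (must run as root, just like the sudo commands in the log):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Enable IPv4 forwarding, as in `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		panic(err)
	}
	// bridge-nf-call-iptables only exists once br_netfilter is loaded.
	const brnf = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(brnf); err != nil {
		fmt.Println("br_netfilter not loaded yet; run `modprobe br_netfilter` first")
		return
	}
	if err := os.WriteFile(brnf, []byte("1"), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("netfilter sysctls set")
}
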
	I0729 18:06:45.871293   54143 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:06:46.014198   54143 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:06:46.150723   54143 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:06:46.150797   54143 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:06:46.156467   54143 start.go:563] Will wait 60s for crictl version
	I0729 18:06:46.156533   54143 ssh_runner.go:195] Run: which crictl
	I0729 18:06:46.160350   54143 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:06:46.205536   54143 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:06:46.205624   54143 ssh_runner.go:195] Run: crio --version
	I0729 18:06:46.233938   54143 ssh_runner.go:195] Run: crio --version
	I0729 18:06:46.263900   54143 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 18:06:46.265140   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetIP
	I0729 18:06:46.268290   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:46.268668   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:06:36 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:06:46.268699   54143 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:06:46.268931   54143 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 18:06:46.273284   54143 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:06:46.286655   54143 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-372591 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-372591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:06:46.286796   54143 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:06:46.286860   54143 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:06:46.319174   54143 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 18:06:46.319266   54143 ssh_runner.go:195] Run: which lz4
	I0729 18:06:46.324048   54143 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 18:06:46.329496   54143 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:06:46.329527   54143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 18:06:48.016962   54143 crio.go:462] duration metric: took 1.693372878s to copy over tarball
	I0729 18:06:48.017042   54143 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:06:50.571468   54143 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.55438875s)
	I0729 18:06:50.571532   54143 crio.go:469] duration metric: took 2.554512913s to extract the tarball
	I0729 18:06:50.571545   54143 ssh_runner.go:146] rm: /preloaded.tar.lz4
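
The preload tarball is scp'd to /preloaded.tar.lz4 and unpacked into /var with the tar invocation shown above. For reference, that exact command wrapped in a small Go program, run locally rather than through ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same flags as the extraction step in the log above.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("extract failed: %v\n%s", err, out))
	}
	fmt.Println("preload extracted")
}
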
	I0729 18:06:50.614899   54143 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:06:50.667953   54143 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 18:06:50.667977   54143 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 18:06:50.668026   54143 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:06:50.668063   54143 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:06:50.668087   54143 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:06:50.668147   54143 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:06:50.668150   54143 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:06:50.668149   54143 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 18:06:50.668182   54143 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 18:06:50.668399   54143 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:06:50.669362   54143 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 18:06:50.669562   54143 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:06:50.669574   54143 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:06:50.669574   54143 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:06:50.669576   54143 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:06:50.669562   54143 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:06:50.669598   54143 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 18:06:50.669618   54143 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:06:50.829195   54143 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:06:50.833084   54143 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 18:06:50.834294   54143 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:06:50.835889   54143 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 18:06:50.837906   54143 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:06:50.849349   54143 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:06:50.877191   54143 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 18:06:50.916129   54143 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 18:06:50.916182   54143 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:06:50.916229   54143 ssh_runner.go:195] Run: which crictl
	I0729 18:06:50.969357   54143 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:06:50.977315   54143 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 18:06:50.977357   54143 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 18:06:50.977375   54143 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:06:50.977414   54143 ssh_runner.go:195] Run: which crictl
	I0729 18:06:50.977358   54143 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:06:50.977501   54143 ssh_runner.go:195] Run: which crictl
	I0729 18:06:51.002035   54143 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 18:06:51.002081   54143 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:06:51.002140   54143 ssh_runner.go:195] Run: which crictl
	I0729 18:06:51.002154   54143 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 18:06:51.002184   54143 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 18:06:51.002232   54143 ssh_runner.go:195] Run: which crictl
	I0729 18:06:51.014691   54143 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 18:06:51.014737   54143 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:06:51.014791   54143 ssh_runner.go:195] Run: which crictl
	I0729 18:06:51.026660   54143 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:06:51.026771   54143 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 18:06:51.026796   54143 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 18:06:51.026827   54143 ssh_runner.go:195] Run: which crictl
	I0729 18:06:51.161197   54143 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 18:06:51.161223   54143 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:06:51.161324   54143 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:06:51.161337   54143 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 18:06:51.161388   54143 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:06:51.161470   54143 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 18:06:51.161517   54143 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 18:06:51.306613   54143 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 18:06:51.311585   54143 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 18:06:51.315014   54143 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 18:06:51.315073   54143 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 18:06:51.315087   54143 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 18:06:51.315150   54143 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 18:06:51.315207   54143 cache_images.go:92] duration metric: took 647.216173ms to LoadCachedImages
	W0729 18:06:51.315269   54143 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
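
Each `needs transfer` line above comes from comparing the image ID reported by `podman image inspect --format {{.Id}}` with the hash the cache expects. A simplified version of that check for a single image; the image name and expected hash are copied from the log, everything else is a sketch:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	image := "registry.k8s.io/kube-proxy:v1.20.0"
	want := "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" // from the log
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		fmt.Printf("%s not present in runtime: %v\n", image, err)
		return
	}
	got := strings.TrimSpace(string(out))
	if got != want {
		fmt.Printf("%q needs transfer: runtime has %s, expected %s\n", image, got, want)
	} else {
		fmt.Printf("%q already loaded\n", image)
	}
}
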
	I0729 18:06:51.315285   54143 kubeadm.go:934] updating node { 192.168.39.171 8443 v1.20.0 crio true true} ...
	I0729 18:06:51.315390   54143 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-372591 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-372591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:06:51.315458   54143 ssh_runner.go:195] Run: crio config
	I0729 18:06:51.367957   54143 cni.go:84] Creating CNI manager for ""
	I0729 18:06:51.367978   54143 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:06:51.367988   54143 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:06:51.368008   54143 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.171 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-372591 NodeName:kubernetes-upgrade-372591 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 18:06:51.368130   54143 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.171
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-372591"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.171
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.171"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:06:51.368189   54143 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 18:06:51.378631   54143 binaries.go:44] Found k8s binaries, skipping transfer
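
The kubeadm config above is rendered from the cluster parameters printed in the kubeadm.go:181 line. A toy rendering of just the InitConfiguration fragment with text/template; the struct fields below are local to this sketch, not minikube's types:

package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	params := struct {
		NodeIP, CRISocket, NodeName string
		APIServerPort               int
	}{
		NodeIP:        "192.168.39.171",
		CRISocket:     "/var/run/crio/crio.sock",
		NodeName:      "kubernetes-upgrade-372591",
		APIServerPort: 8443,
	}
	if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
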
	I0729 18:06:51.378704   54143 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:06:51.388712   54143 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0729 18:06:51.406712   54143 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:06:51.425654   54143 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0729 18:06:51.445071   54143 ssh_runner.go:195] Run: grep 192.168.39.171	control-plane.minikube.internal$ /etc/hosts
	I0729 18:06:51.449188   54143 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.171	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:06:51.462136   54143 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:06:51.603819   54143 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:06:51.622899   54143 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591 for IP: 192.168.39.171
	I0729 18:06:51.622926   54143 certs.go:194] generating shared ca certs ...
	I0729 18:06:51.622946   54143 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:06:51.623117   54143 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:06:51.623172   54143 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:06:51.623185   54143 certs.go:256] generating profile certs ...
	I0729 18:06:51.623249   54143 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/client.key
	I0729 18:06:51.623268   54143 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/client.crt with IP's: []
	I0729 18:06:51.787220   54143 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/client.crt ...
	I0729 18:06:51.787247   54143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/client.crt: {Name:mk10f8df6cb6b71940178dd4249a380041174875 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:06:51.787428   54143 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/client.key ...
	I0729 18:06:51.787449   54143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/client.key: {Name:mk2f37f32eaa7a857a5cc5e95466d71a75619fe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:06:51.787538   54143 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/apiserver.key.d4f47de8
	I0729 18:06:51.787560   54143 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/apiserver.crt.d4f47de8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.171]
	I0729 18:06:51.887251   54143 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/apiserver.crt.d4f47de8 ...
	I0729 18:06:51.887286   54143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/apiserver.crt.d4f47de8: {Name:mk98572b384a927f1ee63e08ebb8332063ae7245 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:06:51.887467   54143 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/apiserver.key.d4f47de8 ...
	I0729 18:06:51.887494   54143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/apiserver.key.d4f47de8: {Name:mk07a433f7d36a5edb8fa2cae79e59ff90ee3042 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:06:51.887604   54143 certs.go:381] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/apiserver.crt.d4f47de8 -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/apiserver.crt
	I0729 18:06:51.887701   54143 certs.go:385] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/apiserver.key.d4f47de8 -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/apiserver.key
	I0729 18:06:51.887781   54143 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/proxy-client.key
	I0729 18:06:51.887803   54143 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/proxy-client.crt with IP's: []
	I0729 18:06:52.006529   54143 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/proxy-client.crt ...
	I0729 18:06:52.006558   54143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/proxy-client.crt: {Name:mk33691c1c5ce4ecd143f3c31dd8cebc8939c18a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:06:52.006716   54143 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/proxy-client.key ...
	I0729 18:06:52.006730   54143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/proxy-client.key: {Name:mkfdb98915c92862bcf92be39b64b3697f6381f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:06:52.006904   54143 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:06:52.006946   54143 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:06:52.006963   54143 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:06:52.006989   54143 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:06:52.007015   54143 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:06:52.007039   54143 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:06:52.007075   54143 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:06:52.007683   54143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:06:52.033387   54143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:06:52.057518   54143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:06:52.081654   54143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:06:52.109088   54143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 18:06:52.137215   54143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:06:52.165652   54143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:06:52.193897   54143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 18:06:52.219526   54143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:06:52.243963   54143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:06:52.271578   54143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:06:52.297466   54143 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:06:52.313887   54143 ssh_runner.go:195] Run: openssl version
	I0729 18:06:52.319766   54143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:06:52.330665   54143 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:06:52.335488   54143 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:06:52.335566   54143 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:06:52.343584   54143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:06:52.354522   54143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:06:52.365349   54143 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:06:52.371607   54143 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:06:52.371674   54143 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:06:52.379572   54143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 18:06:52.392517   54143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:06:52.404009   54143 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:06:52.408672   54143 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:06:52.408734   54143 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:06:52.414735   54143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
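
The openssl/ln sequence above installs each CA bundle under /etc/ssl/certs via its subject hash. As a side illustration (not part of minikube), the same PEM file can be parsed with crypto/x509 to confirm it is a valid certificate and to read its validity window; the path is the one used in the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
		cert.Subject, cert.NotBefore, cert.NotAfter)
}
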
	I0729 18:06:52.431053   54143 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:06:52.435537   54143 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
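The stat failure above is the expected signal for a first start: minikube only checks whether apiserver-kubelet-client.crt already exists on the node, and a missing file means kubeadm will generate it. A one-line sketch of that existence check (the echoed message is illustrative):

    # Exit status 1 (file missing) means the cert still has to be generated by kubeadm
    stat /var/lib/minikube/certs/apiserver-kubelet-client.crt || echo "likely first start"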
	I0729 18:06:52.435619   54143 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-372591 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-372591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:06:52.435714   54143 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:06:52.435801   54143 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:06:52.484475   54143 cri.go:89] found id: ""
	I0729 18:06:52.484544   54143 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:06:52.498957   54143 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:06:52.514873   54143 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:06:52.532012   54143 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:06:52.532037   54143 kubeadm.go:157] found existing configuration files:
	
	I0729 18:06:52.532094   54143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:06:52.545721   54143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:06:52.545786   54143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:06:52.556569   54143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:06:52.567250   54143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:06:52.567329   54143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:06:52.576832   54143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:06:52.586385   54143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:06:52.586448   54143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:06:52.595871   54143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:06:52.605341   54143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:06:52.605401   54143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
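The four grep/rm pairs above are minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 (including one that does not exist, as here) is removed before kubeadm init runs. A compact sketch of the same check, with the file names and endpoint taken from this log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Keep the file only if it already targets the expected control-plane endpoint
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done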
	I0729 18:06:52.615449   54143 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:06:52.893804   54143 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
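For completeness, the remediation named in the preflight warning above is a single command, shown here exactly as kubeadm suggests it:

    # Enable the kubelet systemd unit, as the [WARNING Service-Kubelet] message recommends
    sudo systemctl enable kubelet.service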
	I0729 18:08:50.168546   54143 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 18:08:50.168652   54143 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 18:08:50.170217   54143 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 18:08:50.170284   54143 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:08:50.170421   54143 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:08:50.170550   54143 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:08:50.170687   54143 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:08:50.170786   54143 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:08:50.172724   54143 out.go:204]   - Generating certificates and keys ...
	I0729 18:08:50.172816   54143 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:08:50.172891   54143 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:08:50.172972   54143 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 18:08:50.173046   54143 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 18:08:50.173132   54143 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 18:08:50.173201   54143 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 18:08:50.173280   54143 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 18:08:50.173470   54143 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-372591 localhost] and IPs [192.168.39.171 127.0.0.1 ::1]
	I0729 18:08:50.173554   54143 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 18:08:50.173692   54143 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-372591 localhost] and IPs [192.168.39.171 127.0.0.1 ::1]
	I0729 18:08:50.173783   54143 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 18:08:50.173869   54143 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 18:08:50.173932   54143 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 18:08:50.174009   54143 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:08:50.174089   54143 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:08:50.174168   54143 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:08:50.174262   54143 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:08:50.174338   54143 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:08:50.174477   54143 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:08:50.174602   54143 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:08:50.174680   54143 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:08:50.174770   54143 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:08:50.176145   54143 out.go:204]   - Booting up control plane ...
	I0729 18:08:50.176285   54143 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:08:50.176410   54143 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:08:50.176508   54143 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:08:50.176625   54143 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:08:50.176808   54143 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 18:08:50.176878   54143 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 18:08:50.176941   54143 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:08:50.177147   54143 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:08:50.177234   54143 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:08:50.177434   54143 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:08:50.177525   54143 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:08:50.177776   54143 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:08:50.177881   54143 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:08:50.178122   54143 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:08:50.178224   54143 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:08:50.178481   54143 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:08:50.178496   54143 kubeadm.go:310] 
	I0729 18:08:50.178558   54143 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 18:08:50.178611   54143 kubeadm.go:310] 		timed out waiting for the condition
	I0729 18:08:50.178620   54143 kubeadm.go:310] 
	I0729 18:08:50.178652   54143 kubeadm.go:310] 	This error is likely caused by:
	I0729 18:08:50.178681   54143 kubeadm.go:310] 		- The kubelet is not running
	I0729 18:08:50.178851   54143 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 18:08:50.178869   54143 kubeadm.go:310] 
	I0729 18:08:50.179031   54143 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 18:08:50.179082   54143 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 18:08:50.179130   54143 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 18:08:50.179139   54143 kubeadm.go:310] 
	I0729 18:08:50.179305   54143 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 18:08:50.179432   54143 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 18:08:50.179443   54143 kubeadm.go:310] 
	I0729 18:08:50.179599   54143 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 18:08:50.179739   54143 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 18:08:50.179814   54143 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 18:08:50.179874   54143 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 18:08:50.179924   54143 kubeadm.go:310] 
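Each [kubelet-check] line above is kubeadm polling the kubelet's local healthz endpoint and getting connection refused, i.e. nothing is listening on port 10248. The probe, and the two follow-up checks the error text recommends, can be re-run by hand on the node; a sketch using the same endpoint and commands quoted in this log:

    # The exact probe kubeadm performs; "connection refused" means the kubelet process is not up
    curl -sSL http://localhost:10248/healthz
    # Inspect the kubelet unit and its recent journal, as the error text suggests
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet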
	W0729 18:08:50.180052   54143 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-372591 localhost] and IPs [192.168.39.171 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-372591 localhost] and IPs [192.168.39.171 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
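The failure text above also suggests checking whether a control-plane container crashed under CRI-O. A sketch of those two crictl calls, taken from the suggestion itself (CONTAINERID is a placeholder for an ID found by the first command):

    # List all Kubernetes containers known to CRI-O, excluding pause containers
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # Inspect the logs of a failing container found above
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID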
	
	I0729 18:08:50.180108   54143 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:08:51.379462   54143 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.199323669s)
	I0729 18:08:51.379538   54143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:08:51.399474   54143 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:08:51.414888   54143 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:08:51.414908   54143 kubeadm.go:157] found existing configuration files:
	
	I0729 18:08:51.414962   54143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:08:51.431106   54143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:08:51.431167   54143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:08:51.445679   54143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:08:51.459828   54143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:08:51.459923   54143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:08:51.473152   54143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:08:51.486601   54143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:08:51.486669   54143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:08:51.499988   54143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:08:51.512916   54143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:08:51.512982   54143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:08:51.525977   54143 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:08:51.628802   54143 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 18:08:51.629288   54143 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:08:51.841420   54143 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:08:51.841559   54143 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:08:51.841678   54143 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:08:52.114376   54143 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:08:52.116383   54143 out.go:204]   - Generating certificates and keys ...
	I0729 18:08:52.118343   54143 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:08:52.118431   54143 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:08:52.118527   54143 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:08:52.118611   54143 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:08:52.118709   54143 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:08:52.118771   54143 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:08:52.118847   54143 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:08:52.118928   54143 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:08:52.119610   54143 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:08:52.121433   54143 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:08:52.121475   54143 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:08:52.121561   54143 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:08:53.007152   54143 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:08:53.253896   54143 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:08:53.350143   54143 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:08:53.484792   54143 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:08:53.502656   54143 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:08:53.503989   54143 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:08:53.504056   54143 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:08:53.684574   54143 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:08:53.686344   54143 out.go:204]   - Booting up control plane ...
	I0729 18:08:53.686498   54143 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:08:53.697056   54143 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:08:53.698856   54143 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:08:53.700137   54143 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:08:53.703570   54143 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 18:09:33.706319   54143 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 18:09:33.706750   54143 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:09:33.707029   54143 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:09:38.707677   54143 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:09:38.708059   54143 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:09:48.708867   54143 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:09:48.709081   54143 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:10:08.708557   54143 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:10:08.708819   54143 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:10:48.709422   54143 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:10:48.709587   54143 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:10:48.709598   54143 kubeadm.go:310] 
	I0729 18:10:48.709666   54143 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 18:10:48.709732   54143 kubeadm.go:310] 		timed out waiting for the condition
	I0729 18:10:48.709744   54143 kubeadm.go:310] 
	I0729 18:10:48.709789   54143 kubeadm.go:310] 	This error is likely caused by:
	I0729 18:10:48.709824   54143 kubeadm.go:310] 		- The kubelet is not running
	I0729 18:10:48.709929   54143 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 18:10:48.709939   54143 kubeadm.go:310] 
	I0729 18:10:48.710024   54143 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 18:10:48.710053   54143 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 18:10:48.710086   54143 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 18:10:48.710100   54143 kubeadm.go:310] 
	I0729 18:10:48.710253   54143 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 18:10:48.710410   54143 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 18:10:48.710422   54143 kubeadm.go:310] 
	I0729 18:10:48.710547   54143 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 18:10:48.710659   54143 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 18:10:48.710761   54143 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 18:10:48.710868   54143 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 18:10:48.710882   54143 kubeadm.go:310] 
	I0729 18:10:48.711826   54143 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:10:48.711922   54143 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 18:10:48.712012   54143 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 18:10:48.712093   54143 kubeadm.go:394] duration metric: took 3m56.276475791s to StartCluster
	I0729 18:10:48.712170   54143 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:10:48.712243   54143 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:10:48.755937   54143 cri.go:89] found id: ""
	I0729 18:10:48.755965   54143 logs.go:276] 0 containers: []
	W0729 18:10:48.755976   54143 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:10:48.755984   54143 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:10:48.756042   54143 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:10:48.794873   54143 cri.go:89] found id: ""
	I0729 18:10:48.794897   54143 logs.go:276] 0 containers: []
	W0729 18:10:48.794907   54143 logs.go:278] No container was found matching "etcd"
	I0729 18:10:48.794915   54143 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:10:48.794983   54143 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:10:48.833287   54143 cri.go:89] found id: ""
	I0729 18:10:48.833311   54143 logs.go:276] 0 containers: []
	W0729 18:10:48.833325   54143 logs.go:278] No container was found matching "coredns"
	I0729 18:10:48.833330   54143 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:10:48.833380   54143 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:10:48.870021   54143 cri.go:89] found id: ""
	I0729 18:10:48.870046   54143 logs.go:276] 0 containers: []
	W0729 18:10:48.870054   54143 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:10:48.870059   54143 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:10:48.870117   54143 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:10:48.905669   54143 cri.go:89] found id: ""
	I0729 18:10:48.905713   54143 logs.go:276] 0 containers: []
	W0729 18:10:48.905721   54143 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:10:48.905727   54143 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:10:48.905774   54143 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:10:48.938588   54143 cri.go:89] found id: ""
	I0729 18:10:48.938612   54143 logs.go:276] 0 containers: []
	W0729 18:10:48.938624   54143 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:10:48.938633   54143 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:10:48.938695   54143 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:10:48.971581   54143 cri.go:89] found id: ""
	I0729 18:10:48.971614   54143 logs.go:276] 0 containers: []
	W0729 18:10:48.971625   54143 logs.go:278] No container was found matching "kindnet"
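The empty results above come from minikube querying each control-plane component by container name and finding nothing, consistent with the kubelet never having started the static pods. A sketch of the same per-component query (component names and flags taken from the Run: lines above):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      # --quiet prints only container IDs; an empty result means no such container exists
      echo "== $name =="
      sudo crictl ps -a --quiet --name="$name"
    done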
	I0729 18:10:48.971636   54143 logs.go:123] Gathering logs for container status ...
	I0729 18:10:48.971650   54143 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:10:49.022964   54143 logs.go:123] Gathering logs for kubelet ...
	I0729 18:10:49.022989   54143 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:10:49.075617   54143 logs.go:123] Gathering logs for dmesg ...
	I0729 18:10:49.075645   54143 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:10:49.089201   54143 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:10:49.089226   54143 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:10:49.225063   54143 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:10:49.225091   54143 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:10:49.225106   54143 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
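The "Gathering logs" steps above reduce to four node-side commands, collected here as a sketch so they can be re-run by hand (all taken from the Run: lines above; the describe-nodes call fails in this run because no API server is listening on 8443):

    # Kubelet and CRI-O journals plus kernel warnings, as minikube gathers them
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # Node description via the bundled kubectl; connection refused here since the control plane never came up
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig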
	W0729 18:10:49.328451   54143 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 18:10:49.328497   54143 out.go:239] * 
	W0729 18:10:49.328558   54143 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 18:10:49.328580   54143 out.go:239] * 
	* 
	W0729 18:10:49.329452   54143 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 18:10:49.332996   54143 out.go:177] 
	W0729 18:10:49.334204   54143 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 18:10:49.334249   54143 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 18:10:49.334270   54143 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 18:10:49.335725   54143 out.go:177] 

                                                
                                                
** /stderr **
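The failure above comes down to the kubelet never answering its health check while kubeadm waits for the control plane. As a minimal sketch of the remediation the error text itself suggests (profile name, runtime endpoint, and the cgroup-driver value are taken verbatim from the output above, not verified independently):

	# inspect the kubelet and any crashed control-plane containers inside the node
	minikube -p kubernetes-upgrade-372591 ssh "sudo systemctl status kubelet"
	minikube -p kubernetes-upgrade-372591 ssh "sudo journalctl -xeu kubelet"
	minikube -p kubernetes-upgrade-372591 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# retry the start with the cgroup driver named in the suggestion
	minikube start -p kubernetes-upgrade-372591 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd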
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-372591 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-372591
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-372591: (1.359274089s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-372591 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-372591 status --format={{.Host}}: exit status 7 (70.432538ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-372591 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-372591 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m6.006778112s)
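For reference, the stop → status → start sequence the test drives here reduces to the following commands (same profile and versions as in this run; shown only as an illustration of the upgrade path, not as output captured from it):

	minikube stop -p kubernetes-upgrade-372591
	minikube -p kubernetes-upgrade-372591 status --format={{.Host}}   # "Stopped" / exit status 7 is expected at this point
	minikube start -p kubernetes-upgrade-372591 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --driver=kvm2 --container-runtime=crio
	kubectl --context kubernetes-upgrade-372591 version --output=json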
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-372591 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-372591 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-372591 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (75.36297ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-372591] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19345
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-372591
	    minikube start -p kubernetes-upgrade-372591 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3725912 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-372591 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
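The refused downgrade is expected behaviour; as the suggestion block above spells out, the two workable paths are recreating the profile at the older version or keeping the existing cluster. A sketch using the profile and versions from this run:

	# 1) recreate the profile at v1.20.0
	minikube delete -p kubernetes-upgrade-372591
	minikube start -p kubernetes-upgrade-372591 --kubernetes-version=v1.20.0

	# 2) or keep the existing cluster at v1.31.0-beta.0
	minikube start -p kubernetes-upgrade-372591 --kubernetes-version=v1.31.0-beta.0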
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-372591 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-372591 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.482273351s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-29 18:13:00.447538922 +0000 UTC m=+4637.114907222
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-372591 -n kubernetes-upgrade-372591
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-372591 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-372591 logs -n 25: (2.109265225s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-151120                       | pause-151120              | jenkins | v1.33.1 | 29 Jul 24 18:10 UTC | 29 Jul 24 18:10 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-151120                       | pause-151120              | jenkins | v1.33.1 | 29 Jul 24 18:10 UTC | 29 Jul 24 18:10 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-151120                       | pause-151120              | jenkins | v1.33.1 | 29 Jul 24 18:10 UTC | 29 Jul 24 18:10 UTC |
	| start   | -p force-systemd-env-900095           | force-systemd-env-900095  | jenkins | v1.33.1 | 29 Jul 24 18:10 UTC | 29 Jul 24 18:10 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-444361                | NoKubernetes-444361       | jenkins | v1.33.1 | 29 Jul 24 18:10 UTC | 29 Jul 24 18:10 UTC |
	| start   | -p NoKubernetes-444361                | NoKubernetes-444361       | jenkins | v1.33.1 | 29 Jul 24 18:10 UTC | 29 Jul 24 18:10 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-019533 ssh cat     | force-systemd-flag-019533 | jenkins | v1.33.1 | 29 Jul 24 18:10 UTC | 29 Jul 24 18:10 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-019533          | force-systemd-flag-019533 | jenkins | v1.33.1 | 29 Jul 24 18:10 UTC | 29 Jul 24 18:10 UTC |
	| start   | -p cert-expiration-548627             | cert-expiration-548627    | jenkins | v1.33.1 | 29 Jul 24 18:10 UTC | 29 Jul 24 18:11 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-372591          | kubernetes-upgrade-372591 | jenkins | v1.33.1 | 29 Jul 24 18:10 UTC | 29 Jul 24 18:10 UTC |
	| start   | -p kubernetes-upgrade-372591          | kubernetes-upgrade-372591 | jenkins | v1.33.1 | 29 Jul 24 18:10 UTC | 29 Jul 24 18:11 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-900095           | force-systemd-env-900095  | jenkins | v1.33.1 | 29 Jul 24 18:10 UTC | 29 Jul 24 18:10 UTC |
	| start   | -p cert-options-233394                | cert-options-233394       | jenkins | v1.33.1 | 29 Jul 24 18:10 UTC | 29 Jul 24 18:12 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-444361 sudo           | NoKubernetes-444361       | jenkins | v1.33.1 | 29 Jul 24 18:10 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-444361                | NoKubernetes-444361       | jenkins | v1.33.1 | 29 Jul 24 18:10 UTC | 29 Jul 24 18:11 UTC |
	| start   | -p NoKubernetes-444361                | NoKubernetes-444361       | jenkins | v1.33.1 | 29 Jul 24 18:11 UTC | 29 Jul 24 18:12 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-372591          | kubernetes-upgrade-372591 | jenkins | v1.33.1 | 29 Jul 24 18:11 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-372591          | kubernetes-upgrade-372591 | jenkins | v1.33.1 | 29 Jul 24 18:11 UTC | 29 Jul 24 18:13 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-233394 ssh               | cert-options-233394       | jenkins | v1.33.1 | 29 Jul 24 18:12 UTC | 29 Jul 24 18:12 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-233394 -- sudo        | cert-options-233394       | jenkins | v1.33.1 | 29 Jul 24 18:12 UTC | 29 Jul 24 18:12 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-233394                | cert-options-233394       | jenkins | v1.33.1 | 29 Jul 24 18:12 UTC | 29 Jul 24 18:12 UTC |
	| start   | -p auto-729010 --memory=3072          | auto-729010               | jenkins | v1.33.1 | 29 Jul 24 18:12 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-444361 sudo           | NoKubernetes-444361       | jenkins | v1.33.1 | 29 Jul 24 18:12 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-444361                | NoKubernetes-444361       | jenkins | v1.33.1 | 29 Jul 24 18:12 UTC | 29 Jul 24 18:12 UTC |
	| start   | -p kindnet-729010                     | kindnet-729010            | jenkins | v1.33.1 | 29 Jul 24 18:12 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 18:12:30
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 18:12:30.499603   62086 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:12:30.499847   62086 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:12:30.499855   62086 out.go:304] Setting ErrFile to fd 2...
	I0729 18:12:30.499859   62086 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:12:30.500031   62086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 18:12:30.500558   62086 out.go:298] Setting JSON to false
	I0729 18:12:30.501470   62086 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6902,"bootTime":1722269848,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:12:30.501525   62086 start.go:139] virtualization: kvm guest
	I0729 18:12:30.503697   62086 out.go:177] * [kindnet-729010] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:12:30.505045   62086 notify.go:220] Checking for updates...
	I0729 18:12:30.505093   62086 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 18:12:30.506546   62086 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:12:30.507747   62086 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:12:30.508869   62086 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 18:12:30.509877   62086 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:12:30.510991   62086 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:12:30.512515   62086 config.go:182] Loaded profile config "auto-729010": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:12:30.512635   62086 config.go:182] Loaded profile config "cert-expiration-548627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:12:30.512746   62086 config.go:182] Loaded profile config "kubernetes-upgrade-372591": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:12:30.512842   62086 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:12:30.550254   62086 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 18:12:30.551568   62086 start.go:297] selected driver: kvm2
	I0729 18:12:30.551583   62086 start.go:901] validating driver "kvm2" against <nil>
	I0729 18:12:30.551593   62086 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:12:30.552342   62086 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:12:30.552412   62086 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19345-11206/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:12:30.567564   62086 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:12:30.567632   62086 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 18:12:30.567926   62086 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:12:30.567984   62086 cni.go:84] Creating CNI manager for "kindnet"
	I0729 18:12:30.567993   62086 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 18:12:30.568060   62086 start.go:340] cluster config:
	{Name:kindnet-729010 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-729010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:12:30.568156   62086 iso.go:125] acquiring lock: {Name:mke302f851ce8256f9b44dd080ed38df68285cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:12:30.569978   62086 out.go:177] * Starting "kindnet-729010" primary control-plane node in "kindnet-729010" cluster
	I0729 18:12:27.339960   61512 machine.go:94] provisionDockerMachine start ...
	I0729 18:12:27.339984   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .DriverName
	I0729 18:12:27.340198   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHHostname
	I0729 18:12:27.342424   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:12:27.342795   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:11:30 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:12:27.342838   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:12:27.342958   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHPort
	I0729 18:12:27.343130   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:12:27.343276   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:12:27.343402   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHUsername
	I0729 18:12:27.343567   61512 main.go:141] libmachine: Using SSH client type: native
	I0729 18:12:27.343758   61512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0729 18:12:27.343772   61512 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:12:27.451566   61512 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-372591
	
	I0729 18:12:27.451601   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetMachineName
	I0729 18:12:27.451820   61512 buildroot.go:166] provisioning hostname "kubernetes-upgrade-372591"
	I0729 18:12:27.451836   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetMachineName
	I0729 18:12:27.452025   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHHostname
	I0729 18:12:27.454696   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:12:27.455072   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:11:30 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:12:27.455103   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:12:27.455173   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHPort
	I0729 18:12:27.455348   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:12:27.455494   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:12:27.455638   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHUsername
	I0729 18:12:27.455787   61512 main.go:141] libmachine: Using SSH client type: native
	I0729 18:12:27.456025   61512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0729 18:12:27.456044   61512 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-372591 && echo "kubernetes-upgrade-372591" | sudo tee /etc/hostname
	I0729 18:12:27.586306   61512 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-372591
	
	I0729 18:12:27.586336   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHHostname
	I0729 18:12:27.589060   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:12:27.589402   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:11:30 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:12:27.589440   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:12:27.589630   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHPort
	I0729 18:12:27.589828   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:12:27.589985   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:12:27.590147   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHUsername
	I0729 18:12:27.590335   61512 main.go:141] libmachine: Using SSH client type: native
	I0729 18:12:27.590535   61512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0729 18:12:27.590560   61512 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-372591' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-372591/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-372591' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:12:27.703745   61512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:12:27.703825   61512 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:12:27.703869   61512 buildroot.go:174] setting up certificates
	I0729 18:12:27.703880   61512 provision.go:84] configureAuth start
	I0729 18:12:27.703894   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetMachineName
	I0729 18:12:27.704197   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetIP
	I0729 18:12:27.707098   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:12:27.707508   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:11:30 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:12:27.707536   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:12:27.707691   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHHostname
	I0729 18:12:27.710059   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:12:27.710404   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:11:30 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:12:27.710434   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:12:27.710685   61512 provision.go:143] copyHostCerts
	I0729 18:12:27.710764   61512 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:12:27.710776   61512 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:12:27.710851   61512 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:12:27.710975   61512 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:12:27.710987   61512 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:12:27.711017   61512 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:12:27.711108   61512 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:12:27.711120   61512 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:12:27.711151   61512 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:12:27.711240   61512 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-372591 san=[127.0.0.1 192.168.39.171 kubernetes-upgrade-372591 localhost minikube]
	I0729 18:12:27.841236   61512 provision.go:177] copyRemoteCerts
	I0729 18:12:27.841321   61512 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:12:27.841350   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHHostname
	I0729 18:12:27.844017   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:12:27.844567   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:11:30 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:12:27.844594   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:12:27.844811   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHPort
	I0729 18:12:27.844995   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:12:27.845174   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHUsername
	I0729 18:12:27.845321   61512 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/kubernetes-upgrade-372591/id_rsa Username:docker}
	I0729 18:12:27.932119   61512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:12:27.960780   61512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0729 18:12:27.988430   61512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 18:12:28.013484   61512 provision.go:87] duration metric: took 309.591915ms to configureAuth
	I0729 18:12:28.013515   61512 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:12:28.013734   61512 config.go:182] Loaded profile config "kubernetes-upgrade-372591": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:12:28.013812   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHHostname
	I0729 18:12:28.016615   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:12:28.016935   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:11:30 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:12:28.016965   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:12:28.017112   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHPort
	I0729 18:12:28.017337   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:12:28.017504   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:12:28.017665   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHUsername
	I0729 18:12:28.017853   61512 main.go:141] libmachine: Using SSH client type: native
	I0729 18:12:28.018052   61512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0729 18:12:28.018081   61512 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:12:34.271073   61923 start.go:364] duration metric: took 11.261072593s to acquireMachinesLock for "auto-729010"
	I0729 18:12:34.271154   61923 start.go:93] Provisioning new machine with config: &{Name:auto-729010 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-729010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:12:34.271304   61923 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 18:12:30.571281   62086 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:12:30.571313   62086 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 18:12:30.571323   62086 cache.go:56] Caching tarball of preloaded images
	I0729 18:12:30.571436   62086 preload.go:172] Found /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:12:30.571446   62086 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 18:12:30.571534   62086 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kindnet-729010/config.json ...
	I0729 18:12:30.571549   62086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kindnet-729010/config.json: {Name:mk23d0a23c71820a02a7a9917d7e51f8a485dc25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:12:30.571668   62086 start.go:360] acquireMachinesLock for kindnet-729010: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:12:34.031711   61512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:12:34.031734   61512 machine.go:97] duration metric: took 6.69175816s to provisionDockerMachine
	I0729 18:12:34.031748   61512 start.go:293] postStartSetup for "kubernetes-upgrade-372591" (driver="kvm2")
	I0729 18:12:34.031762   61512 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:12:34.031781   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .DriverName
	I0729 18:12:34.032143   61512 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:12:34.032184   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHHostname
	I0729 18:12:34.035198   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:12:34.035683   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:11:30 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:12:34.035712   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:12:34.035906   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHPort
	I0729 18:12:34.036147   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:12:34.036317   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHUsername
	I0729 18:12:34.036485   61512 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/kubernetes-upgrade-372591/id_rsa Username:docker}
	I0729 18:12:34.121084   61512 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:12:34.125498   61512 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:12:34.125522   61512 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:12:34.125588   61512 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:12:34.125690   61512 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:12:34.125811   61512 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:12:34.134915   61512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:12:34.160015   61512 start.go:296] duration metric: took 128.252428ms for postStartSetup
	I0729 18:12:34.160055   61512 fix.go:56] duration metric: took 6.844637811s for fixHost
	I0729 18:12:34.160073   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHHostname
	I0729 18:12:34.162592   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:12:34.162936   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:11:30 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:12:34.162961   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:12:34.163112   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHPort
	I0729 18:12:34.163312   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:12:34.163469   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:12:34.163611   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHUsername
	I0729 18:12:34.163750   61512 main.go:141] libmachine: Using SSH client type: native
	I0729 18:12:34.163915   61512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0729 18:12:34.163925   61512 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:12:34.270923   61512 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722276754.261247888
	
	I0729 18:12:34.270940   61512 fix.go:216] guest clock: 1722276754.261247888
	I0729 18:12:34.270946   61512 fix.go:229] Guest: 2024-07-29 18:12:34.261247888 +0000 UTC Remote: 2024-07-29 18:12:34.160058937 +0000 UTC m=+37.191712090 (delta=101.188951ms)
	I0729 18:12:34.270975   61512 fix.go:200] guest clock delta is within tolerance: 101.188951ms
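[illustrative sketch, not part of the test log] The fix.go lines above read the guest clock over SSH (the date output shown a few lines up), compare it against the host clock, and accept the machine when the delta stays inside a tolerance. A minimal Go sketch of that comparison, taking the guest's raw seconds.nanoseconds string as input; the 2-second tolerance below is an assumed value, not taken from the log, and this is not minikube's actual fix.go code.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// checkClockDelta parses the guest's "seconds.nanoseconds" clock output and
// reports the drift from the local clock plus whether it is inside tolerance.
func checkClockDelta(guestOut string, tolerance time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(guestOut, 64) // e.g. "1722276754.261247888"
	if err != nil {
		return 0, false, fmt.Errorf("parsing guest clock %q: %w", guestOut, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance, nil
}

func main() {
	delta, ok, err := checkClockDelta("1722276754.261247888", 2*time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
}
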
	I0729 18:12:34.270980   61512 start.go:83] releasing machines lock for "kubernetes-upgrade-372591", held for 6.955600329s
	I0729 18:12:34.271000   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .DriverName
	I0729 18:12:34.271255   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetIP
	I0729 18:12:34.274101   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:12:34.274534   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:11:30 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:12:34.274567   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:12:34.274667   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .DriverName
	I0729 18:12:34.275376   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .DriverName
	I0729 18:12:34.275553   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .DriverName
	I0729 18:12:34.275642   61512 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:12:34.275697   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHHostname
	I0729 18:12:34.275774   61512 ssh_runner.go:195] Run: cat /version.json
	I0729 18:12:34.275791   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHHostname
	I0729 18:12:34.278550   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:12:34.278736   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:12:34.278887   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:11:30 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:12:34.278915   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:12:34.279061   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHPort
	I0729 18:12:34.279151   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:11:30 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:12:34.279188   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:12:34.279254   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:12:34.279397   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHUsername
	I0729 18:12:34.279447   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHPort
	I0729 18:12:34.279533   61512 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/kubernetes-upgrade-372591/id_rsa Username:docker}
	I0729 18:12:34.279595   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHKeyPath
	I0729 18:12:34.279698   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetSSHUsername
	I0729 18:12:34.279826   61512 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/kubernetes-upgrade-372591/id_rsa Username:docker}
	I0729 18:12:34.378715   61512 ssh_runner.go:195] Run: systemctl --version
	I0729 18:12:34.387074   61512 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:12:34.540348   61512 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:12:34.547392   61512 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:12:34.547466   61512 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:12:34.558696   61512 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
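[illustrative sketch, not part of the test log] The find/mv run above renames any bridge or podman CNI configs out of the way (here nothing matched, hence "nothing to disable"). A small Go sketch of the same idea using filepath.Glob; the /etc/cni/net.d directory and the .mk_disabled suffix come from the log, but the helper itself is illustrative rather than minikube's implementation.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// disableBridgeCNIConfigs renames bridge/podman CNI config files so the
// container runtime ignores them, mirroring the find ... -exec mv above.
func disableBridgeCNIConfigs(dir string) ([]string, error) {
	var disabled []string
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNIConfigs("/etc/cni/net.d")
	fmt.Println(disabled, err)
}
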
	I0729 18:12:34.558724   61512 start.go:495] detecting cgroup driver to use...
	I0729 18:12:34.558814   61512 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:12:34.576002   61512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:12:34.591609   61512 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:12:34.591663   61512 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:12:34.607680   61512 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:12:34.622936   61512 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:12:34.768208   61512 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:12:34.913737   61512 docker.go:233] disabling docker service ...
	I0729 18:12:34.913804   61512 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:12:34.934850   61512 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:12:34.949655   61512 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:12:35.101009   61512 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:12:35.260977   61512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:12:35.277589   61512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:12:35.297047   61512 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 18:12:35.297114   61512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:12:35.308131   61512 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:12:35.308206   61512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:12:35.319338   61512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:12:35.330242   61512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:12:35.341008   61512 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:12:35.352295   61512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:12:35.364027   61512 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:12:35.379345   61512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:12:35.390219   61512 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:12:35.400355   61512 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:12:35.409614   61512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:12:35.563961   61512 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:12:36.185527   61512 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:12:36.185597   61512 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:12:36.191409   61512 start.go:563] Will wait 60s for crictl version
	I0729 18:12:36.191468   61512 ssh_runner.go:195] Run: which crictl
	I0729 18:12:36.196437   61512 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:12:36.231622   61512 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
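[illustrative sketch, not part of the test log] After restarting cri-o, the log waits up to 60s for /var/run/crio/crio.sock to appear and then for crictl to answer with a version. A small polling sketch of that wait, assuming the same socket path and a 60-second budget; minikube's real retry logic in start.go is more involved.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForCRISocket polls for the cri-o socket and then for a successful
// "crictl version", giving up after the deadline.
func waitForCRISocket(socket string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(socket); err == nil {
			if exec.Command("sudo", "crictl", "version").Run() == nil {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, socket)
}

func main() {
	if err := waitForCRISocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
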
	I0729 18:12:36.231702   61512 ssh_runner.go:195] Run: crio --version
	I0729 18:12:36.261328   61512 ssh_runner.go:195] Run: crio --version
	I0729 18:12:36.293103   61512 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 18:12:36.294549   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) Calling .GetIP
	I0729 18:12:36.297744   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:12:36.298102   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:5d:7a", ip: ""} in network mk-kubernetes-upgrade-372591: {Iface:virbr1 ExpiryTime:2024-07-29 19:11:30 +0000 UTC Type:0 Mac:52:54:00:f6:5d:7a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:kubernetes-upgrade-372591 Clientid:01:52:54:00:f6:5d:7a}
	I0729 18:12:36.298131   61512 main.go:141] libmachine: (kubernetes-upgrade-372591) DBG | domain kubernetes-upgrade-372591 has defined IP address 192.168.39.171 and MAC address 52:54:00:f6:5d:7a in network mk-kubernetes-upgrade-372591
	I0729 18:12:36.298355   61512 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 18:12:36.302911   61512 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-372591 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-372591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:12:36.303024   61512 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 18:12:36.303121   61512 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:12:36.346482   61512 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:12:36.346502   61512 crio.go:433] Images already preloaded, skipping extraction
	I0729 18:12:36.346547   61512 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:12:36.381748   61512 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:12:36.381771   61512 cache_images.go:84] Images are preloaded, skipping loading
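[illustrative sketch, not part of the test log] The "sudo crictl images --output json" runs above decide whether the preloaded images are already present in cri-o, so loading can be skipped. A sketch of parsing that JSON and checking for required repo tags; the "images"/"repoTags" field names are an assumption about crictl's JSON output shape (adjust for your crictl version), and the wanted-image list below is just an example.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList matches the assumed shape of `crictl images --output json`
// closely enough to read repo tags; other fields are ignored.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// preloadedImagesPresent reports whether every wanted tag shows up in the
// runtime's image store.
func preloadedImagesPresent(wanted []string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, w := range wanted {
		if !have[w] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := preloadedImagesPresent([]string{"registry.k8s.io/pause:3.10"})
	fmt.Println(ok, err)
}
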
	I0729 18:12:36.381778   61512 kubeadm.go:934] updating node { 192.168.39.171 8443 v1.31.0-beta.0 crio true true} ...
	I0729 18:12:36.381882   61512 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-372591 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-372591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:12:36.381953   61512 ssh_runner.go:195] Run: crio config
	I0729 18:12:36.428487   61512 cni.go:84] Creating CNI manager for ""
	I0729 18:12:36.428524   61512 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:12:36.428541   61512 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:12:36.428562   61512 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.171 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-372591 NodeName:kubernetes-upgrade-372591 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/ce
rts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:12:36.428699   61512 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.171
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-372591"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.171
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.171"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:12:36.428758   61512 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 18:12:36.440657   61512 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:12:36.440732   61512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:12:36.451597   61512 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I0729 18:12:36.469780   61512 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 18:12:36.488508   61512 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I0729 18:12:36.508845   61512 ssh_runner.go:195] Run: grep 192.168.39.171	control-plane.minikube.internal$ /etc/hosts
	I0729 18:12:36.513078   61512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:12:36.663129   61512 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:12:36.678176   61512 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591 for IP: 192.168.39.171
	I0729 18:12:36.678200   61512 certs.go:194] generating shared ca certs ...
	I0729 18:12:36.678216   61512 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:12:36.678399   61512 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:12:36.678445   61512 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:12:36.678456   61512 certs.go:256] generating profile certs ...
	I0729 18:12:36.678527   61512 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/client.key
	I0729 18:12:36.678571   61512 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/apiserver.key.d4f47de8
	I0729 18:12:36.678603   61512 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/proxy-client.key
	I0729 18:12:36.678706   61512 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:12:36.678733   61512 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:12:36.678739   61512 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:12:36.678760   61512 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:12:36.678780   61512 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:12:36.678800   61512 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:12:36.678838   61512 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:12:36.679456   61512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:12:36.707416   61512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:12:36.732823   61512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:12:36.757951   61512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:12:36.784015   61512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 18:12:36.810037   61512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:12:36.835468   61512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:12:36.863239   61512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kubernetes-upgrade-372591/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 18:12:36.938040   61512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:12:34.273113   61923 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 18:12:34.273320   61923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:12:34.273370   61923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:12:34.292552   61923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34003
	I0729 18:12:34.293000   61923 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:12:34.293600   61923 main.go:141] libmachine: Using API Version  1
	I0729 18:12:34.293626   61923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:12:34.293921   61923 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:12:34.294115   61923 main.go:141] libmachine: (auto-729010) Calling .GetMachineName
	I0729 18:12:34.294315   61923 main.go:141] libmachine: (auto-729010) Calling .DriverName
	I0729 18:12:34.294489   61923 start.go:159] libmachine.API.Create for "auto-729010" (driver="kvm2")
	I0729 18:12:34.294518   61923 client.go:168] LocalClient.Create starting
	I0729 18:12:34.294548   61923 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem
	I0729 18:12:34.294581   61923 main.go:141] libmachine: Decoding PEM data...
	I0729 18:12:34.294604   61923 main.go:141] libmachine: Parsing certificate...
	I0729 18:12:34.294673   61923 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem
	I0729 18:12:34.294699   61923 main.go:141] libmachine: Decoding PEM data...
	I0729 18:12:34.294716   61923 main.go:141] libmachine: Parsing certificate...
	I0729 18:12:34.294742   61923 main.go:141] libmachine: Running pre-create checks...
	I0729 18:12:34.294754   61923 main.go:141] libmachine: (auto-729010) Calling .PreCreateCheck
	I0729 18:12:34.295149   61923 main.go:141] libmachine: (auto-729010) Calling .GetConfigRaw
	I0729 18:12:34.295581   61923 main.go:141] libmachine: Creating machine...
	I0729 18:12:34.295598   61923 main.go:141] libmachine: (auto-729010) Calling .Create
	I0729 18:12:34.295744   61923 main.go:141] libmachine: (auto-729010) Creating KVM machine...
	I0729 18:12:34.296882   61923 main.go:141] libmachine: (auto-729010) DBG | found existing default KVM network
	I0729 18:12:34.297918   61923 main.go:141] libmachine: (auto-729010) DBG | I0729 18:12:34.297771   62125 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c7:d9:b6} reservation:<nil>}
	I0729 18:12:34.299099   61923 main.go:141] libmachine: (auto-729010) DBG | I0729 18:12:34.299024   62125 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010fcf0}
	I0729 18:12:34.299150   61923 main.go:141] libmachine: (auto-729010) DBG | created network xml: 
	I0729 18:12:34.299169   61923 main.go:141] libmachine: (auto-729010) DBG | <network>
	I0729 18:12:34.299183   61923 main.go:141] libmachine: (auto-729010) DBG |   <name>mk-auto-729010</name>
	I0729 18:12:34.299199   61923 main.go:141] libmachine: (auto-729010) DBG |   <dns enable='no'/>
	I0729 18:12:34.299209   61923 main.go:141] libmachine: (auto-729010) DBG |   
	I0729 18:12:34.299220   61923 main.go:141] libmachine: (auto-729010) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0729 18:12:34.299234   61923 main.go:141] libmachine: (auto-729010) DBG |     <dhcp>
	I0729 18:12:34.299243   61923 main.go:141] libmachine: (auto-729010) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0729 18:12:34.299266   61923 main.go:141] libmachine: (auto-729010) DBG |     </dhcp>
	I0729 18:12:34.299281   61923 main.go:141] libmachine: (auto-729010) DBG |   </ip>
	I0729 18:12:34.299306   61923 main.go:141] libmachine: (auto-729010) DBG |   
	I0729 18:12:34.299326   61923 main.go:141] libmachine: (auto-729010) DBG | </network>
	I0729 18:12:34.299340   61923 main.go:141] libmachine: (auto-729010) DBG | 
	I0729 18:12:34.304562   61923 main.go:141] libmachine: (auto-729010) DBG | trying to create private KVM network mk-auto-729010 192.168.50.0/24...
	I0729 18:12:34.371644   61923 main.go:141] libmachine: (auto-729010) DBG | private KVM network mk-auto-729010 192.168.50.0/24 created
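[illustrative sketch, not part of the test log] The network.go lines above pick the first free private /24 (192.168.39.0/24 was already taken by the other profile, so 192.168.50.0/24 is used) before the libvirt network is defined. A toy version of that scan with net/netip, checking candidate subnets against a caller-supplied "taken" set; the real logic also inspects live interfaces and reservations.

package main

import (
	"fmt"
	"net/netip"
)

// firstFreeSubnet walks candidate private /24s and returns the first one
// that is not already in the taken set, loosely mirroring network.go.
func firstFreeSubnet(candidates []string, taken map[string]bool) (netip.Prefix, error) {
	for _, c := range candidates {
		p, err := netip.ParsePrefix(c)
		if err != nil {
			return netip.Prefix{}, err
		}
		if !taken[p.String()] {
			return p, nil
		}
	}
	return netip.Prefix{}, fmt.Errorf("no free subnet among %v", candidates)
}

func main() {
	candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"}
	taken := map[string]bool{"192.168.39.0/24": true} // in use by mk-kubernetes-upgrade-372591
	p, err := firstFreeSubnet(candidates, taken)
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet:", p)
}
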
	I0729 18:12:34.371700   61923 main.go:141] libmachine: (auto-729010) DBG | I0729 18:12:34.371598   62125 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 18:12:34.371727   61923 main.go:141] libmachine: (auto-729010) Setting up store path in /home/jenkins/minikube-integration/19345-11206/.minikube/machines/auto-729010 ...
	I0729 18:12:34.371750   61923 main.go:141] libmachine: (auto-729010) Building disk image from file:///home/jenkins/minikube-integration/19345-11206/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 18:12:34.371764   61923 main.go:141] libmachine: (auto-729010) Downloading /home/jenkins/minikube-integration/19345-11206/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19345-11206/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 18:12:34.619242   61923 main.go:141] libmachine: (auto-729010) DBG | I0729 18:12:34.619103   62125 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/auto-729010/id_rsa...
	I0729 18:12:34.896080   61923 main.go:141] libmachine: (auto-729010) DBG | I0729 18:12:34.895957   62125 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/auto-729010/auto-729010.rawdisk...
	I0729 18:12:34.896141   61923 main.go:141] libmachine: (auto-729010) DBG | Writing magic tar header
	I0729 18:12:34.896157   61923 main.go:141] libmachine: (auto-729010) DBG | Writing SSH key tar header
	I0729 18:12:34.896169   61923 main.go:141] libmachine: (auto-729010) DBG | I0729 18:12:34.896114   62125 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19345-11206/.minikube/machines/auto-729010 ...
	I0729 18:12:34.896300   61923 main.go:141] libmachine: (auto-729010) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/auto-729010
	I0729 18:12:34.896331   61923 main.go:141] libmachine: (auto-729010) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube/machines/auto-729010 (perms=drwx------)
	I0729 18:12:34.896342   61923 main.go:141] libmachine: (auto-729010) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube/machines
	I0729 18:12:34.896357   61923 main.go:141] libmachine: (auto-729010) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 18:12:34.896383   61923 main.go:141] libmachine: (auto-729010) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206
	I0729 18:12:34.896399   61923 main.go:141] libmachine: (auto-729010) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 18:12:34.896409   61923 main.go:141] libmachine: (auto-729010) DBG | Checking permissions on dir: /home/jenkins
	I0729 18:12:34.896421   61923 main.go:141] libmachine: (auto-729010) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube/machines (perms=drwxr-xr-x)
	I0729 18:12:34.896433   61923 main.go:141] libmachine: (auto-729010) DBG | Checking permissions on dir: /home
	I0729 18:12:34.896444   61923 main.go:141] libmachine: (auto-729010) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube (perms=drwxr-xr-x)
	I0729 18:12:34.896459   61923 main.go:141] libmachine: (auto-729010) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206 (perms=drwxrwxr-x)
	I0729 18:12:34.896473   61923 main.go:141] libmachine: (auto-729010) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 18:12:34.896485   61923 main.go:141] libmachine: (auto-729010) DBG | Skipping /home - not owner
	I0729 18:12:34.896513   61923 main.go:141] libmachine: (auto-729010) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 18:12:34.896542   61923 main.go:141] libmachine: (auto-729010) Creating domain...
	I0729 18:12:34.897471   61923 main.go:141] libmachine: (auto-729010) define libvirt domain using xml: 
	I0729 18:12:34.897494   61923 main.go:141] libmachine: (auto-729010) <domain type='kvm'>
	I0729 18:12:34.897505   61923 main.go:141] libmachine: (auto-729010)   <name>auto-729010</name>
	I0729 18:12:34.897513   61923 main.go:141] libmachine: (auto-729010)   <memory unit='MiB'>3072</memory>
	I0729 18:12:34.897524   61923 main.go:141] libmachine: (auto-729010)   <vcpu>2</vcpu>
	I0729 18:12:34.897533   61923 main.go:141] libmachine: (auto-729010)   <features>
	I0729 18:12:34.897542   61923 main.go:141] libmachine: (auto-729010)     <acpi/>
	I0729 18:12:34.897549   61923 main.go:141] libmachine: (auto-729010)     <apic/>
	I0729 18:12:34.897569   61923 main.go:141] libmachine: (auto-729010)     <pae/>
	I0729 18:12:34.897583   61923 main.go:141] libmachine: (auto-729010)     
	I0729 18:12:34.897595   61923 main.go:141] libmachine: (auto-729010)   </features>
	I0729 18:12:34.897606   61923 main.go:141] libmachine: (auto-729010)   <cpu mode='host-passthrough'>
	I0729 18:12:34.897614   61923 main.go:141] libmachine: (auto-729010)   
	I0729 18:12:34.897624   61923 main.go:141] libmachine: (auto-729010)   </cpu>
	I0729 18:12:34.897631   61923 main.go:141] libmachine: (auto-729010)   <os>
	I0729 18:12:34.897642   61923 main.go:141] libmachine: (auto-729010)     <type>hvm</type>
	I0729 18:12:34.897654   61923 main.go:141] libmachine: (auto-729010)     <boot dev='cdrom'/>
	I0729 18:12:34.897663   61923 main.go:141] libmachine: (auto-729010)     <boot dev='hd'/>
	I0729 18:12:34.897672   61923 main.go:141] libmachine: (auto-729010)     <bootmenu enable='no'/>
	I0729 18:12:34.897682   61923 main.go:141] libmachine: (auto-729010)   </os>
	I0729 18:12:34.897691   61923 main.go:141] libmachine: (auto-729010)   <devices>
	I0729 18:12:34.897713   61923 main.go:141] libmachine: (auto-729010)     <disk type='file' device='cdrom'>
	I0729 18:12:34.897738   61923 main.go:141] libmachine: (auto-729010)       <source file='/home/jenkins/minikube-integration/19345-11206/.minikube/machines/auto-729010/boot2docker.iso'/>
	I0729 18:12:34.897755   61923 main.go:141] libmachine: (auto-729010)       <target dev='hdc' bus='scsi'/>
	I0729 18:12:34.897782   61923 main.go:141] libmachine: (auto-729010)       <readonly/>
	I0729 18:12:34.897813   61923 main.go:141] libmachine: (auto-729010)     </disk>
	I0729 18:12:34.897842   61923 main.go:141] libmachine: (auto-729010)     <disk type='file' device='disk'>
	I0729 18:12:34.897855   61923 main.go:141] libmachine: (auto-729010)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 18:12:34.897880   61923 main.go:141] libmachine: (auto-729010)       <source file='/home/jenkins/minikube-integration/19345-11206/.minikube/machines/auto-729010/auto-729010.rawdisk'/>
	I0729 18:12:34.897908   61923 main.go:141] libmachine: (auto-729010)       <target dev='hda' bus='virtio'/>
	I0729 18:12:34.897921   61923 main.go:141] libmachine: (auto-729010)     </disk>
	I0729 18:12:34.897932   61923 main.go:141] libmachine: (auto-729010)     <interface type='network'>
	I0729 18:12:34.897943   61923 main.go:141] libmachine: (auto-729010)       <source network='mk-auto-729010'/>
	I0729 18:12:34.897954   61923 main.go:141] libmachine: (auto-729010)       <model type='virtio'/>
	I0729 18:12:34.897961   61923 main.go:141] libmachine: (auto-729010)     </interface>
	I0729 18:12:34.897970   61923 main.go:141] libmachine: (auto-729010)     <interface type='network'>
	I0729 18:12:34.897978   61923 main.go:141] libmachine: (auto-729010)       <source network='default'/>
	I0729 18:12:34.898003   61923 main.go:141] libmachine: (auto-729010)       <model type='virtio'/>
	I0729 18:12:34.898015   61923 main.go:141] libmachine: (auto-729010)     </interface>
	I0729 18:12:34.898025   61923 main.go:141] libmachine: (auto-729010)     <serial type='pty'>
	I0729 18:12:34.898033   61923 main.go:141] libmachine: (auto-729010)       <target port='0'/>
	I0729 18:12:34.898043   61923 main.go:141] libmachine: (auto-729010)     </serial>
	I0729 18:12:34.898051   61923 main.go:141] libmachine: (auto-729010)     <console type='pty'>
	I0729 18:12:34.898059   61923 main.go:141] libmachine: (auto-729010)       <target type='serial' port='0'/>
	I0729 18:12:34.898069   61923 main.go:141] libmachine: (auto-729010)     </console>
	I0729 18:12:34.898085   61923 main.go:141] libmachine: (auto-729010)     <rng model='virtio'>
	I0729 18:12:34.898097   61923 main.go:141] libmachine: (auto-729010)       <backend model='random'>/dev/random</backend>
	I0729 18:12:34.898107   61923 main.go:141] libmachine: (auto-729010)     </rng>
	I0729 18:12:34.898115   61923 main.go:141] libmachine: (auto-729010)     
	I0729 18:12:34.898134   61923 main.go:141] libmachine: (auto-729010)     
	I0729 18:12:34.898142   61923 main.go:141] libmachine: (auto-729010)   </devices>
	I0729 18:12:34.898146   61923 main.go:141] libmachine: (auto-729010) </domain>
	I0729 18:12:34.898156   61923 main.go:141] libmachine: (auto-729010) 
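[illustrative sketch, not part of the test log] The domain XML above is what libmachine defines in libvirt for the new VM. A compact text/template sketch that renders a similarly shaped, heavily trimmed domain definition from values seen in the log; the template here is illustrative and omits most of the devices (cdrom, serial console, rng, second NIC) that the real one carries.

package main

import (
	"os"
	"text/template"
)

// domainTmpl is a trimmed-down libvirt domain template in the spirit of the
// XML dumped above.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type domain struct {
	Name      string
	MemoryMiB int
	CPUs      int
	DiskPath  string
	Network   string
}

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	// Values mirror the auto-729010 machine from the log.
	d := domain{
		Name: "auto-729010", MemoryMiB: 3072, CPUs: 2,
		DiskPath: "/home/jenkins/minikube-integration/19345-11206/.minikube/machines/auto-729010/auto-729010.rawdisk",
		Network:  "mk-auto-729010",
	}
	if err := t.Execute(os.Stdout, d); err != nil {
		panic(err)
	}
}
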
	I0729 18:12:34.902669   61923 main.go:141] libmachine: (auto-729010) DBG | domain auto-729010 has defined MAC address 52:54:00:35:c2:1d in network default
	I0729 18:12:34.903238   61923 main.go:141] libmachine: (auto-729010) Ensuring networks are active...
	I0729 18:12:34.903258   61923 main.go:141] libmachine: (auto-729010) DBG | domain auto-729010 has defined MAC address 52:54:00:44:47:7a in network mk-auto-729010
	I0729 18:12:34.903939   61923 main.go:141] libmachine: (auto-729010) Ensuring network default is active
	I0729 18:12:34.904277   61923 main.go:141] libmachine: (auto-729010) Ensuring network mk-auto-729010 is active
	I0729 18:12:34.904764   61923 main.go:141] libmachine: (auto-729010) Getting domain xml...
	I0729 18:12:34.905424   61923 main.go:141] libmachine: (auto-729010) Creating domain...
	I0729 18:12:36.170644   61923 main.go:141] libmachine: (auto-729010) Waiting to get IP...
	I0729 18:12:36.171398   61923 main.go:141] libmachine: (auto-729010) DBG | domain auto-729010 has defined MAC address 52:54:00:44:47:7a in network mk-auto-729010
	I0729 18:12:36.171816   61923 main.go:141] libmachine: (auto-729010) DBG | unable to find current IP address of domain auto-729010 in network mk-auto-729010
	I0729 18:12:36.171847   61923 main.go:141] libmachine: (auto-729010) DBG | I0729 18:12:36.171792   62125 retry.go:31] will retry after 273.586672ms: waiting for machine to come up
	I0729 18:12:36.447497   61923 main.go:141] libmachine: (auto-729010) DBG | domain auto-729010 has defined MAC address 52:54:00:44:47:7a in network mk-auto-729010
	I0729 18:12:36.448033   61923 main.go:141] libmachine: (auto-729010) DBG | unable to find current IP address of domain auto-729010 in network mk-auto-729010
	I0729 18:12:36.448060   61923 main.go:141] libmachine: (auto-729010) DBG | I0729 18:12:36.448003   62125 retry.go:31] will retry after 296.540152ms: waiting for machine to come up
	I0729 18:12:36.746743   61923 main.go:141] libmachine: (auto-729010) DBG | domain auto-729010 has defined MAC address 52:54:00:44:47:7a in network mk-auto-729010
	I0729 18:12:36.747325   61923 main.go:141] libmachine: (auto-729010) DBG | unable to find current IP address of domain auto-729010 in network mk-auto-729010
	I0729 18:12:36.747348   61923 main.go:141] libmachine: (auto-729010) DBG | I0729 18:12:36.747289   62125 retry.go:31] will retry after 311.687498ms: waiting for machine to come up
	I0729 18:12:37.060776   61923 main.go:141] libmachine: (auto-729010) DBG | domain auto-729010 has defined MAC address 52:54:00:44:47:7a in network mk-auto-729010
	I0729 18:12:37.061352   61923 main.go:141] libmachine: (auto-729010) DBG | unable to find current IP address of domain auto-729010 in network mk-auto-729010
	I0729 18:12:37.061391   61923 main.go:141] libmachine: (auto-729010) DBG | I0729 18:12:37.061303   62125 retry.go:31] will retry after 541.46367ms: waiting for machine to come up
	I0729 18:12:37.604615   61923 main.go:141] libmachine: (auto-729010) DBG | domain auto-729010 has defined MAC address 52:54:00:44:47:7a in network mk-auto-729010
	I0729 18:12:37.605073   61923 main.go:141] libmachine: (auto-729010) DBG | unable to find current IP address of domain auto-729010 in network mk-auto-729010
	I0729 18:12:37.605101   61923 main.go:141] libmachine: (auto-729010) DBG | I0729 18:12:37.605023   62125 retry.go:31] will retry after 716.356114ms: waiting for machine to come up
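[illustrative sketch, not part of the test log] The retry.go lines above poll the libvirt DHCP leases with growing delays until the new domain reports an IP. A generic retry-with-backoff sketch in the same spirit, taking a caller-supplied lookup function; the jittered delays in the log come from minikube's own retry package, and the lookup and address below are stand-ins.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryForIP calls lookup until it returns an address or the deadline passes,
// roughly doubling the wait between attempts like the retries logged above.
func retryForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	wait := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		time.Sleep(wait)
		if wait < 5*time.Second {
			wait *= 2
		}
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	attempts := 0
	ip, err := retryForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.50.10", nil // stand-in address, not from the log
	}, 30*time.Second)
	fmt.Println(ip, err)
}
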
	I0729 18:12:37.091471   61512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:12:37.197561   61512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:12:37.360711   61512 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:12:37.499381   61512 ssh_runner.go:195] Run: openssl version
	I0729 18:12:37.562388   61512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:12:37.749003   61512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:12:37.770223   61512 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:12:37.770289   61512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:12:37.831877   61512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:12:37.991614   61512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:12:38.136400   61512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:12:38.174458   61512 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:12:38.174527   61512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:12:38.185307   61512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:12:38.233499   61512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:12:38.261467   61512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:12:38.270413   61512 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:12:38.270469   61512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:12:38.284696   61512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 18:12:38.314382   61512 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:12:38.352438   61512 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:12:38.367186   61512 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:12:38.381730   61512 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:12:38.390640   61512 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:12:38.397135   61512 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:12:38.408599   61512 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
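[illustrative sketch, not part of the test log] Each of the openssl -checkend 86400 runs above asks whether a certificate will still be valid 24 hours from now. The same check can be sketched in Go with crypto/x509 instead of shelling out; the path below is one of the certs from the log, and this is an illustrative alternative rather than what minikube actually does.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires before
// now + window, the equivalent of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}
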
	I0729 18:12:38.416923   61512 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-372591 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-372591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:12:38.417010   61512 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:12:38.417074   61512 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:12:38.499693   61512 cri.go:89] found id: "2fe18b0df01d37ebc5b532fc3662d194eec78672248d240caf7e6a8ab44774a5"
	I0729 18:12:38.499715   61512 cri.go:89] found id: "c5dace301b8f4c615e9a99a0206497609a3f92dcff97f5e063dddcaa90839fc4"
	I0729 18:12:38.499720   61512 cri.go:89] found id: "02ec59ec5ad683ab865f43b358daadaf3cf9b0c5f5781b801f2b19ded4d15629"
	I0729 18:12:38.499725   61512 cri.go:89] found id: "595c38d05ab43783ead8c77666375934a9ed71f6cce133f1347b0b403fce2949"
	I0729 18:12:38.499729   61512 cri.go:89] found id: "c2836ce6e749f3a44e7fc0d81af6b14f3f9e55c362b1ddbc0e3383313794a2c5"
	I0729 18:12:38.499734   61512 cri.go:89] found id: "2e4f6e9dddba3390d161c8c0a7ddbe98f65cc03b0d15b26ba8f0aac9832ef16e"
	I0729 18:12:38.499738   61512 cri.go:89] found id: "bb015621b2cb010eec586b1e4fa8cb9670188fb811294fb390101079698628f5"
	I0729 18:12:38.499742   61512 cri.go:89] found id: "421dc2b428779aa047f0ee87ebdc8a30e998a732708f5ae0476e771c9e8ad7a5"
	I0729 18:12:38.499746   61512 cri.go:89] found id: "8fe7ba1d5b88ef99944cbb6e290ef52f5a59103abbe3591a58ed4f319960df37"
	I0729 18:12:38.499755   61512 cri.go:89] found id: "4efecc3f7d497573dfca62d92c634955f25b35066c857d47a2e93c42a09da615"
	I0729 18:12:38.499760   61512 cri.go:89] found id: "2a115f435d9dae37ff5877a5a8b1946537fba8861abfddadec9a164d1d8cc7fe"
	I0729 18:12:38.499764   61512 cri.go:89] found id: "6a9e145c827401f958b085314c1d3dec2e810df7cec13b80d15a602589ffb19a"
	I0729 18:12:38.499769   61512 cri.go:89] found id: ""
	I0729 18:12:38.499814   61512 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 18:13:01 kubernetes-upgrade-372591 crio[2302]: time="2024-07-29 18:13:01.322814814Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=af8cf337-b385-4f36-8548-46f2b99eefe0 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:13:01 kubernetes-upgrade-372591 crio[2302]: time="2024-07-29 18:13:01.326652775Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be33c9b1-ffba-46cf-96d1-329aa5f39812 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:13:01 kubernetes-upgrade-372591 crio[2302]: time="2024-07-29 18:13:01.328706170Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722276781328671290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be33c9b1-ffba-46cf-96d1-329aa5f39812 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:13:01 kubernetes-upgrade-372591 crio[2302]: time="2024-07-29 18:13:01.331913845Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b7583b09-f799-4e48-b1d4-56c32de4e309 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:13:01 kubernetes-upgrade-372591 crio[2302]: time="2024-07-29 18:13:01.332068610Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b7583b09-f799-4e48-b1d4-56c32de4e309 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:13:01 kubernetes-upgrade-372591 crio[2302]: time="2024-07-29 18:13:01.332366197Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19283fd5d6633efa7156084c7eb70cfe6a7d9b0009edd64ae2dbad4c33a4cc0e,PodSandboxId:96c48adf35702a2da2b7359ebd5d1e195f8623253e1fa62d10cff4408b2b6994,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722276778510271323,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-8qpq6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9efa7c44-7be4-48c2-826d-6e97215e729b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd976d2cb71fbfbf2cae7f0d82d11362761275f462c4b927341d0f688e47b6a3,PodSandboxId:877d299bf2e2f7d028effffceb115fc7462be4796da6dbac3c396b46112cd49a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722276778477223345,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 97e28563-2221-46ec-84a6-a351456ccde4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4cf33b6ddf42b20cac9805b2e1bfd434576b594d6737ebe13b386c459a067fb,PodSandboxId:9c8a35e275245e9e49a3c278fc7ebdc6e90fb8f358348edb30b5a2c3259a4372,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722276773643831557,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 0234254a7269f5bb4d2c43a5e27d1da7,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63ee469a94495a1b93b59eb9234d3dbe59dcdc6a9c47f1ee7d67ac841d1ecc47,PodSandboxId:2cf7de48012bc0bd76c022290d39f44346ab48dc308ba03c8a2b7592ee1bf0c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722276773637862744,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
c81fea3f1e8d8b5837d8fd5e0fbf3179,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50e157ab37db5c68192e7b2b7e3076313c185dbd66a655479fbb2e7734b5e867,PodSandboxId:727cc0302f07873cbccb55fd55fe2f362933ee30f161847cc1100a8640547207,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722276773629250586,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 39d53f327d13e2e2f8364bb45a65d4e3,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53e7b48b62dee585586f5a253e0ca6994cc5768b2bd7aa221be1ed155b447320,PodSandboxId:6144f2536596a7ebad79ade018215c80cf0e7e3a3f1df76a9a726847036938ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722276773616176342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: e5e57568f67623875ef0fb6c47810dbe,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b420d47749a9f4b9fe460adc3e89d5e74ff89b8d5a84d1a59b6eab04f91fb633,PodSandboxId:0d9e2f097ab38ac68207df4110ffed3ceed7a3d07d8708a6c59722f7e33d7882,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722276757712970405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g8xmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c9abcf-0679-40c2-
90f5-be93f0cc0fc0,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1d782dc7a3a659e4dce24410747b1b31dfd6747604bc674f1cf178e34c299b6,PodSandboxId:877d299bf2e2f7d028effffceb115fc7462be4796da6dbac3c396b46112cd49a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722276757696135989,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97e28563-2221-46ec-84a6-a351456c
cde4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38fb4ef4663f1ac13ccfda6a8fb8a90f54f2bb868bfac2903a5892efff31441a,PodSandboxId:8094ca34d057f6f5fa2858a056427da883b7af85f3050dc561e18a4cb48d7609,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722276758659383962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-jd528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1727eab3-b9c8-46fa-8ea2-c86753672581,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1400471e3bb43ed29b6ec9dcbb7180775535a6f2c6d76bc7a98c2cb5d7f9b7c7,PodSandboxId:96c48adf35702a2da2b7359ebd5d1e195f8623253e1fa62d10cff4408b2b6994,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722276758337857321,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-8qpq6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9efa7c44-7be4-48c2-826d-6e97215e729b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe18b0df01d37ebc5b532fc3662d194eec78672248d240caf7e6a8ab44774a5,PodSandboxId:2cf7de48012bc0bd76c022290d39f44346ab48dc308ba03c8a2b7592ee1bf0c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722276757581620070,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c81fea3f1e8d8b5837d8fd5e0fbf3179,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5dace301b8f4c615e9a99a0206497609a3f92dcff97f5e063dddcaa90839fc4,PodSandboxId:727cc0302f07873cbccb55fd55fe2f362933ee30f161847cc1100a8640547207,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSp
ecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722276757492273030,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39d53f327d13e2e2f8364bb45a65d4e3,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ec59ec5ad683ab865f43b358daadaf3cf9b0c5f5781b801f2b19ded4d15629,PodSandboxId:9c8a35e275245e9e49a3c278fc7ebdc6e90fb8f358348edb30b5a2c3259a4372,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722276757344824503,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0234254a7269f5bb4d2c43a5e27d1da7,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595c38d05ab43783ead8c77666375934a9ed71f6cce133f1347b0b403fce2949,PodSandboxId:6144f2536596a7ebad79ade018215c80cf0e7e3a3f1df76a9a726847036938ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722276757221147060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5e57568f67623875ef0fb6c47810dbe,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e4f6e9dddba3390d161c8c0a7ddbe98f65cc03b0d15b26ba8f0aac9832ef16e,PodSandboxId:76204356bf06e4b6bf88205a7a4fbe15eeed7c6a0cf982a64d30047393dc6980,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageR
ef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722276721523327280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-jd528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1727eab3-b9c8-46fa-8ea2-c86753672581,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:421dc2b428779aa047f0ee87ebdc8a30e998a732708f5ae0476e771c9e8ad7a5,PodSandboxId:a1406241dc60c403f3f70df69c1308a5a8d3e43533f8360aaf8ae5273755cac4,Metadata:&ContainerMeta
data{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722276719629664928,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g8xmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c9abcf-0679-40c2-90f5-be93f0cc0fc0,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b7583b09-f799-4e48-b1d4-56c32de4e309 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:13:01 kubernetes-upgrade-372591 crio[2302]: time="2024-07-29 18:13:01.359285697Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=c7412c51-935a-4139-857c-eb7c56d87c1d name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 18:13:01 kubernetes-upgrade-372591 crio[2302]: time="2024-07-29 18:13:01.360092022Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8094ca34d057f6f5fa2858a056427da883b7af85f3050dc561e18a4cb48d7609,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-jd528,Uid:1727eab3-b9c8-46fa-8ea2-c86753672581,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722276757439357742,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-jd528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1727eab3-b9c8-46fa-8ea2-c86753672581,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T18:12:00.937367429Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:96c48adf35702a2da2b7359ebd5d1e195f8623253e1fa62d10cff4408b2b6994,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-8qpq6,Uid:9efa7c44-7be4-48c2-826d-6e97215e729b,Namespac
e:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722276757411461548,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-8qpq6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9efa7c44-7be4-48c2-826d-6e97215e729b,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T18:12:00.944999443Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:727cc0302f07873cbccb55fd55fe2f362933ee30f161847cc1100a8640547207,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-372591,Uid:39d53f327d13e2e2f8364bb45a65d4e3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722276757112813225,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39d53f327d13e2e2f8364bb45a65d4e
3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 39d53f327d13e2e2f8364bb45a65d4e3,kubernetes.io/config.seen: 2024-07-29T18:11:46.604857795Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2cf7de48012bc0bd76c022290d39f44346ab48dc308ba03c8a2b7592ee1bf0c5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-372591,Uid:c81fea3f1e8d8b5837d8fd5e0fbf3179,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722276757065657563,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c81fea3f1e8d8b5837d8fd5e0fbf3179,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c81fea3f1e8d8b5837d8fd5e0fbf3179,kubernetes.io/config.seen: 2024-07-29T18:11:46.604858935Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0d9e2f097ab38ac68207df4110ffed3ceed7
a3d07d8708a6c59722f7e33d7882,Metadata:&PodSandboxMetadata{Name:kube-proxy-g8xmr,Uid:90c9abcf-0679-40c2-90f5-be93f0cc0fc0,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722276756978240482,Labels:map[string]string{controller-revision-hash: 6558c48888,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-g8xmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c9abcf-0679-40c2-90f5-be93f0cc0fc0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T18:11:59.223649971Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:877d299bf2e2f7d028effffceb115fc7462be4796da6dbac3c396b46112cd49a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:97e28563-2221-46ec-84a6-a351456ccde4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722276756961564617,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.na
me: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97e28563-2221-46ec-84a6-a351456ccde4,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T18:11:59.385847569Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6
144f2536596a7ebad79ade018215c80cf0e7e3a3f1df76a9a726847036938ad,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-372591,Uid:e5e57568f67623875ef0fb6c47810dbe,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722276756917925016,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5e57568f67623875ef0fb6c47810dbe,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.171:8443,kubernetes.io/config.hash: e5e57568f67623875ef0fb6c47810dbe,kubernetes.io/config.seen: 2024-07-29T18:11:46.604856669Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9c8a35e275245e9e49a3c278fc7ebdc6e90fb8f358348edb30b5a2c3259a4372,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-372591,Uid:0234254a7269f5bb4d2c43a5e27d1da7,Namespace:kube-system,Atte
mpt:1,},State:SANDBOX_READY,CreatedAt:1722276756907629027,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0234254a7269f5bb4d2c43a5e27d1da7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.171:2379,kubernetes.io/config.hash: 0234254a7269f5bb4d2c43a5e27d1da7,kubernetes.io/config.seen: 2024-07-29T18:11:46.604853025Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:76204356bf06e4b6bf88205a7a4fbe15eeed7c6a0cf982a64d30047393dc6980,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-jd528,Uid:1727eab3-b9c8-46fa-8ea2-c86753672581,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722276721244401207,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-jd528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 1727eab3-b9c8-46fa-8ea2-c86753672581,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T18:12:00.937367429Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a1406241dc60c403f3f70df69c1308a5a8d3e43533f8360aaf8ae5273755cac4,Metadata:&PodSandboxMetadata{Name:kube-proxy-g8xmr,Uid:90c9abcf-0679-40c2-90f5-be93f0cc0fc0,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722276719531299692,Labels:map[string]string{controller-revision-hash: 6558c48888,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-g8xmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c9abcf-0679-40c2-90f5-be93f0cc0fc0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T18:11:59.223649971Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c7412c51-935a-4139-857c-eb7c56d87c1d name=/runtime.v1.Runt
imeService/ListPodSandbox
	Jul 29 18:13:01 kubernetes-upgrade-372591 crio[2302]: time="2024-07-29 18:13:01.361994911Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e8ef51e-251e-4855-a192-3ce283f38aad name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:13:01 kubernetes-upgrade-372591 crio[2302]: time="2024-07-29 18:13:01.362094819Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e8ef51e-251e-4855-a192-3ce283f38aad name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:13:01 kubernetes-upgrade-372591 crio[2302]: time="2024-07-29 18:13:01.362541455Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19283fd5d6633efa7156084c7eb70cfe6a7d9b0009edd64ae2dbad4c33a4cc0e,PodSandboxId:96c48adf35702a2da2b7359ebd5d1e195f8623253e1fa62d10cff4408b2b6994,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722276778510271323,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-8qpq6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9efa7c44-7be4-48c2-826d-6e97215e729b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd976d2cb71fbfbf2cae7f0d82d11362761275f462c4b927341d0f688e47b6a3,PodSandboxId:877d299bf2e2f7d028effffceb115fc7462be4796da6dbac3c396b46112cd49a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722276778477223345,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 97e28563-2221-46ec-84a6-a351456ccde4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4cf33b6ddf42b20cac9805b2e1bfd434576b594d6737ebe13b386c459a067fb,PodSandboxId:9c8a35e275245e9e49a3c278fc7ebdc6e90fb8f358348edb30b5a2c3259a4372,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722276773643831557,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 0234254a7269f5bb4d2c43a5e27d1da7,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63ee469a94495a1b93b59eb9234d3dbe59dcdc6a9c47f1ee7d67ac841d1ecc47,PodSandboxId:2cf7de48012bc0bd76c022290d39f44346ab48dc308ba03c8a2b7592ee1bf0c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722276773637862744,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
c81fea3f1e8d8b5837d8fd5e0fbf3179,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50e157ab37db5c68192e7b2b7e3076313c185dbd66a655479fbb2e7734b5e867,PodSandboxId:727cc0302f07873cbccb55fd55fe2f362933ee30f161847cc1100a8640547207,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722276773629250586,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 39d53f327d13e2e2f8364bb45a65d4e3,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53e7b48b62dee585586f5a253e0ca6994cc5768b2bd7aa221be1ed155b447320,PodSandboxId:6144f2536596a7ebad79ade018215c80cf0e7e3a3f1df76a9a726847036938ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722276773616176342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: e5e57568f67623875ef0fb6c47810dbe,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b420d47749a9f4b9fe460adc3e89d5e74ff89b8d5a84d1a59b6eab04f91fb633,PodSandboxId:0d9e2f097ab38ac68207df4110ffed3ceed7a3d07d8708a6c59722f7e33d7882,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722276757712970405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g8xmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c9abcf-0679-40c2-
90f5-be93f0cc0fc0,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1d782dc7a3a659e4dce24410747b1b31dfd6747604bc674f1cf178e34c299b6,PodSandboxId:877d299bf2e2f7d028effffceb115fc7462be4796da6dbac3c396b46112cd49a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722276757696135989,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97e28563-2221-46ec-84a6-a351456c
cde4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38fb4ef4663f1ac13ccfda6a8fb8a90f54f2bb868bfac2903a5892efff31441a,PodSandboxId:8094ca34d057f6f5fa2858a056427da883b7af85f3050dc561e18a4cb48d7609,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722276758659383962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-jd528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1727eab3-b9c8-46fa-8ea2-c86753672581,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1400471e3bb43ed29b6ec9dcbb7180775535a6f2c6d76bc7a98c2cb5d7f9b7c7,PodSandboxId:96c48adf35702a2da2b7359ebd5d1e195f8623253e1fa62d10cff4408b2b6994,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722276758337857321,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-8qpq6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9efa7c44-7be4-48c2-826d-6e97215e729b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe18b0df01d37ebc5b532fc3662d194eec78672248d240caf7e6a8ab44774a5,PodSandboxId:2cf7de48012bc0bd76c022290d39f44346ab48dc308ba03c8a2b7592ee1bf0c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722276757581620070,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c81fea3f1e8d8b5837d8fd5e0fbf3179,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5dace301b8f4c615e9a99a0206497609a3f92dcff97f5e063dddcaa90839fc4,PodSandboxId:727cc0302f07873cbccb55fd55fe2f362933ee30f161847cc1100a8640547207,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSp
ecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722276757492273030,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39d53f327d13e2e2f8364bb45a65d4e3,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ec59ec5ad683ab865f43b358daadaf3cf9b0c5f5781b801f2b19ded4d15629,PodSandboxId:9c8a35e275245e9e49a3c278fc7ebdc6e90fb8f358348edb30b5a2c3259a4372,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722276757344824503,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0234254a7269f5bb4d2c43a5e27d1da7,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595c38d05ab43783ead8c77666375934a9ed71f6cce133f1347b0b403fce2949,PodSandboxId:6144f2536596a7ebad79ade018215c80cf0e7e3a3f1df76a9a726847036938ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722276757221147060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5e57568f67623875ef0fb6c47810dbe,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e4f6e9dddba3390d161c8c0a7ddbe98f65cc03b0d15b26ba8f0aac9832ef16e,PodSandboxId:76204356bf06e4b6bf88205a7a4fbe15eeed7c6a0cf982a64d30047393dc6980,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageR
ef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722276721523327280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-jd528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1727eab3-b9c8-46fa-8ea2-c86753672581,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:421dc2b428779aa047f0ee87ebdc8a30e998a732708f5ae0476e771c9e8ad7a5,PodSandboxId:a1406241dc60c403f3f70df69c1308a5a8d3e43533f8360aaf8ae5273755cac4,Metadata:&ContainerMeta
data{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722276719629664928,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g8xmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c9abcf-0679-40c2-90f5-be93f0cc0fc0,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e8ef51e-251e-4855-a192-3ce283f38aad name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:13:01 kubernetes-upgrade-372591 crio[2302]: time="2024-07-29 18:13:01.392031502Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=51b24a0b-7ad5-4fe8-810b-522c3f49406a name=/runtime.v1.RuntimeService/Version
	Jul 29 18:13:01 kubernetes-upgrade-372591 crio[2302]: time="2024-07-29 18:13:01.392104387Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=51b24a0b-7ad5-4fe8-810b-522c3f49406a name=/runtime.v1.RuntimeService/Version
	Jul 29 18:13:01 kubernetes-upgrade-372591 crio[2302]: time="2024-07-29 18:13:01.393212163Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bcb9a079-c6b1-4e4b-aa90-f7e754ce504e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:13:01 kubernetes-upgrade-372591 crio[2302]: time="2024-07-29 18:13:01.393553974Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722276781393533691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bcb9a079-c6b1-4e4b-aa90-f7e754ce504e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:13:01 kubernetes-upgrade-372591 crio[2302]: time="2024-07-29 18:13:01.394067837Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a621a6a-8464-450c-a210-e6c98d6506fb name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:13:01 kubernetes-upgrade-372591 crio[2302]: time="2024-07-29 18:13:01.394118877Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a621a6a-8464-450c-a210-e6c98d6506fb name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:13:01 kubernetes-upgrade-372591 crio[2302]: time="2024-07-29 18:13:01.394418492Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19283fd5d6633efa7156084c7eb70cfe6a7d9b0009edd64ae2dbad4c33a4cc0e,PodSandboxId:96c48adf35702a2da2b7359ebd5d1e195f8623253e1fa62d10cff4408b2b6994,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722276778510271323,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-8qpq6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9efa7c44-7be4-48c2-826d-6e97215e729b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd976d2cb71fbfbf2cae7f0d82d11362761275f462c4b927341d0f688e47b6a3,PodSandboxId:877d299bf2e2f7d028effffceb115fc7462be4796da6dbac3c396b46112cd49a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722276778477223345,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 97e28563-2221-46ec-84a6-a351456ccde4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4cf33b6ddf42b20cac9805b2e1bfd434576b594d6737ebe13b386c459a067fb,PodSandboxId:9c8a35e275245e9e49a3c278fc7ebdc6e90fb8f358348edb30b5a2c3259a4372,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722276773643831557,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 0234254a7269f5bb4d2c43a5e27d1da7,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63ee469a94495a1b93b59eb9234d3dbe59dcdc6a9c47f1ee7d67ac841d1ecc47,PodSandboxId:2cf7de48012bc0bd76c022290d39f44346ab48dc308ba03c8a2b7592ee1bf0c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722276773637862744,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
c81fea3f1e8d8b5837d8fd5e0fbf3179,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50e157ab37db5c68192e7b2b7e3076313c185dbd66a655479fbb2e7734b5e867,PodSandboxId:727cc0302f07873cbccb55fd55fe2f362933ee30f161847cc1100a8640547207,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722276773629250586,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 39d53f327d13e2e2f8364bb45a65d4e3,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53e7b48b62dee585586f5a253e0ca6994cc5768b2bd7aa221be1ed155b447320,PodSandboxId:6144f2536596a7ebad79ade018215c80cf0e7e3a3f1df76a9a726847036938ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722276773616176342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: e5e57568f67623875ef0fb6c47810dbe,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b420d47749a9f4b9fe460adc3e89d5e74ff89b8d5a84d1a59b6eab04f91fb633,PodSandboxId:0d9e2f097ab38ac68207df4110ffed3ceed7a3d07d8708a6c59722f7e33d7882,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722276757712970405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g8xmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c9abcf-0679-40c2-
90f5-be93f0cc0fc0,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1d782dc7a3a659e4dce24410747b1b31dfd6747604bc674f1cf178e34c299b6,PodSandboxId:877d299bf2e2f7d028effffceb115fc7462be4796da6dbac3c396b46112cd49a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722276757696135989,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97e28563-2221-46ec-84a6-a351456c
cde4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38fb4ef4663f1ac13ccfda6a8fb8a90f54f2bb868bfac2903a5892efff31441a,PodSandboxId:8094ca34d057f6f5fa2858a056427da883b7af85f3050dc561e18a4cb48d7609,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722276758659383962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-jd528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1727eab3-b9c8-46fa-8ea2-c86753672581,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1400471e3bb43ed29b6ec9dcbb7180775535a6f2c6d76bc7a98c2cb5d7f9b7c7,PodSandboxId:96c48adf35702a2da2b7359ebd5d1e195f8623253e1fa62d10cff4408b2b6994,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722276758337857321,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-8qpq6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9efa7c44-7be4-48c2-826d-6e97215e729b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe18b0df01d37ebc5b532fc3662d194eec78672248d240caf7e6a8ab44774a5,PodSandboxId:2cf7de48012bc0bd76c022290d39f44346ab48dc308ba03c8a2b7592ee1bf0c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722276757581620070,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c81fea3f1e8d8b5837d8fd5e0fbf3179,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5dace301b8f4c615e9a99a0206497609a3f92dcff97f5e063dddcaa90839fc4,PodSandboxId:727cc0302f07873cbccb55fd55fe2f362933ee30f161847cc1100a8640547207,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSp
ecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722276757492273030,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39d53f327d13e2e2f8364bb45a65d4e3,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ec59ec5ad683ab865f43b358daadaf3cf9b0c5f5781b801f2b19ded4d15629,PodSandboxId:9c8a35e275245e9e49a3c278fc7ebdc6e90fb8f358348edb30b5a2c3259a4372,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722276757344824503,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0234254a7269f5bb4d2c43a5e27d1da7,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595c38d05ab43783ead8c77666375934a9ed71f6cce133f1347b0b403fce2949,PodSandboxId:6144f2536596a7ebad79ade018215c80cf0e7e3a3f1df76a9a726847036938ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722276757221147060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5e57568f67623875ef0fb6c47810dbe,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e4f6e9dddba3390d161c8c0a7ddbe98f65cc03b0d15b26ba8f0aac9832ef16e,PodSandboxId:76204356bf06e4b6bf88205a7a4fbe15eeed7c6a0cf982a64d30047393dc6980,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageR
ef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722276721523327280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-jd528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1727eab3-b9c8-46fa-8ea2-c86753672581,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:421dc2b428779aa047f0ee87ebdc8a30e998a732708f5ae0476e771c9e8ad7a5,PodSandboxId:a1406241dc60c403f3f70df69c1308a5a8d3e43533f8360aaf8ae5273755cac4,Metadata:&ContainerMeta
data{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722276719629664928,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g8xmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c9abcf-0679-40c2-90f5-be93f0cc0fc0,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0a621a6a-8464-450c-a210-e6c98d6506fb name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:13:01 kubernetes-upgrade-372591 crio[2302]: time="2024-07-29 18:13:01.436213608Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2b7bf81d-8d48-46d3-8268-d2806dafb216 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:13:01 kubernetes-upgrade-372591 crio[2302]: time="2024-07-29 18:13:01.436334158Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2b7bf81d-8d48-46d3-8268-d2806dafb216 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:13:01 kubernetes-upgrade-372591 crio[2302]: time="2024-07-29 18:13:01.438298169Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=96e394f9-95d9-40ca-9e25-9b01f8e5610d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:13:01 kubernetes-upgrade-372591 crio[2302]: time="2024-07-29 18:13:01.438978259Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722276781438944042,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=96e394f9-95d9-40ca-9e25-9b01f8e5610d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:13:01 kubernetes-upgrade-372591 crio[2302]: time="2024-07-29 18:13:01.439642130Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=75a4e619-fe28-428c-b271-b90e9cb4e0c4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:13:01 kubernetes-upgrade-372591 crio[2302]: time="2024-07-29 18:13:01.439799357Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=75a4e619-fe28-428c-b271-b90e9cb4e0c4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:13:01 kubernetes-upgrade-372591 crio[2302]: time="2024-07-29 18:13:01.440342900Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19283fd5d6633efa7156084c7eb70cfe6a7d9b0009edd64ae2dbad4c33a4cc0e,PodSandboxId:96c48adf35702a2da2b7359ebd5d1e195f8623253e1fa62d10cff4408b2b6994,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722276778510271323,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-8qpq6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9efa7c44-7be4-48c2-826d-6e97215e729b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd976d2cb71fbfbf2cae7f0d82d11362761275f462c4b927341d0f688e47b6a3,PodSandboxId:877d299bf2e2f7d028effffceb115fc7462be4796da6dbac3c396b46112cd49a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722276778477223345,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 97e28563-2221-46ec-84a6-a351456ccde4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4cf33b6ddf42b20cac9805b2e1bfd434576b594d6737ebe13b386c459a067fb,PodSandboxId:9c8a35e275245e9e49a3c278fc7ebdc6e90fb8f358348edb30b5a2c3259a4372,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722276773643831557,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 0234254a7269f5bb4d2c43a5e27d1da7,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63ee469a94495a1b93b59eb9234d3dbe59dcdc6a9c47f1ee7d67ac841d1ecc47,PodSandboxId:2cf7de48012bc0bd76c022290d39f44346ab48dc308ba03c8a2b7592ee1bf0c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722276773637862744,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
c81fea3f1e8d8b5837d8fd5e0fbf3179,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50e157ab37db5c68192e7b2b7e3076313c185dbd66a655479fbb2e7734b5e867,PodSandboxId:727cc0302f07873cbccb55fd55fe2f362933ee30f161847cc1100a8640547207,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722276773629250586,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 39d53f327d13e2e2f8364bb45a65d4e3,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53e7b48b62dee585586f5a253e0ca6994cc5768b2bd7aa221be1ed155b447320,PodSandboxId:6144f2536596a7ebad79ade018215c80cf0e7e3a3f1df76a9a726847036938ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722276773616176342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: e5e57568f67623875ef0fb6c47810dbe,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b420d47749a9f4b9fe460adc3e89d5e74ff89b8d5a84d1a59b6eab04f91fb633,PodSandboxId:0d9e2f097ab38ac68207df4110ffed3ceed7a3d07d8708a6c59722f7e33d7882,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722276757712970405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g8xmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c9abcf-0679-40c2-
90f5-be93f0cc0fc0,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1d782dc7a3a659e4dce24410747b1b31dfd6747604bc674f1cf178e34c299b6,PodSandboxId:877d299bf2e2f7d028effffceb115fc7462be4796da6dbac3c396b46112cd49a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722276757696135989,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97e28563-2221-46ec-84a6-a351456c
cde4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38fb4ef4663f1ac13ccfda6a8fb8a90f54f2bb868bfac2903a5892efff31441a,PodSandboxId:8094ca34d057f6f5fa2858a056427da883b7af85f3050dc561e18a4cb48d7609,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722276758659383962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-jd528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1727eab3-b9c8-46fa-8ea2-c86753672581,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1400471e3bb43ed29b6ec9dcbb7180775535a6f2c6d76bc7a98c2cb5d7f9b7c7,PodSandboxId:96c48adf35702a2da2b7359ebd5d1e195f8623253e1fa62d10cff4408b2b6994,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722276758337857321,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-8qpq6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9efa7c44-7be4-48c2-826d-6e97215e729b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe18b0df01d37ebc5b532fc3662d194eec78672248d240caf7e6a8ab44774a5,PodSandboxId:2cf7de48012bc0bd76c022290d39f44346ab48dc308ba03c8a2b7592ee1bf0c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722276757581620070,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c81fea3f1e8d8b5837d8fd5e0fbf3179,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5dace301b8f4c615e9a99a0206497609a3f92dcff97f5e063dddcaa90839fc4,PodSandboxId:727cc0302f07873cbccb55fd55fe2f362933ee30f161847cc1100a8640547207,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSp
ecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722276757492273030,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39d53f327d13e2e2f8364bb45a65d4e3,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ec59ec5ad683ab865f43b358daadaf3cf9b0c5f5781b801f2b19ded4d15629,PodSandboxId:9c8a35e275245e9e49a3c278fc7ebdc6e90fb8f358348edb30b5a2c3259a4372,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722276757344824503,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0234254a7269f5bb4d2c43a5e27d1da7,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595c38d05ab43783ead8c77666375934a9ed71f6cce133f1347b0b403fce2949,PodSandboxId:6144f2536596a7ebad79ade018215c80cf0e7e3a3f1df76a9a726847036938ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722276757221147060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-372591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5e57568f67623875ef0fb6c47810dbe,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e4f6e9dddba3390d161c8c0a7ddbe98f65cc03b0d15b26ba8f0aac9832ef16e,PodSandboxId:76204356bf06e4b6bf88205a7a4fbe15eeed7c6a0cf982a64d30047393dc6980,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageR
ef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722276721523327280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-jd528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1727eab3-b9c8-46fa-8ea2-c86753672581,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:421dc2b428779aa047f0ee87ebdc8a30e998a732708f5ae0476e771c9e8ad7a5,PodSandboxId:a1406241dc60c403f3f70df69c1308a5a8d3e43533f8360aaf8ae5273755cac4,Metadata:&ContainerMeta
data{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722276719629664928,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g8xmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c9abcf-0679-40c2-90f5-be93f0cc0fc0,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=75a4e619-fe28-428c-b271-b90e9cb4e0c4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	19283fd5d6633       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago        Running             coredns                   2                   96c48adf35702       coredns-5cfdc65f69-8qpq6
	dd976d2cb71fb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago        Running             storage-provisioner       2                   877d299bf2e2f       storage-provisioner
	c4cf33b6ddf42       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   7 seconds ago        Running             etcd                      2                   9c8a35e275245       etcd-kubernetes-upgrade-372591
	63ee469a94495       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   7 seconds ago        Running             kube-scheduler            2                   2cf7de48012bc       kube-scheduler-kubernetes-upgrade-372591
	50e157ab37db5       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   7 seconds ago        Running             kube-controller-manager   2                   727cc0302f078       kube-controller-manager-kubernetes-upgrade-372591
	53e7b48b62dee       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   7 seconds ago        Running             kube-apiserver            2                   6144f2536596a       kube-apiserver-kubernetes-upgrade-372591
	38fb4ef4663f1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   22 seconds ago       Running             coredns                   1                   8094ca34d057f       coredns-5cfdc65f69-jd528
	1400471e3bb43       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   23 seconds ago       Exited              coredns                   1                   96c48adf35702       coredns-5cfdc65f69-8qpq6
	b420d47749a9f       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   23 seconds ago       Running             kube-proxy                1                   0d9e2f097ab38       kube-proxy-g8xmr
	f1d782dc7a3a6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   23 seconds ago       Exited              storage-provisioner       1                   877d299bf2e2f       storage-provisioner
	2fe18b0df01d3       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   23 seconds ago       Exited              kube-scheduler            1                   2cf7de48012bc       kube-scheduler-kubernetes-upgrade-372591
	c5dace301b8f4       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   24 seconds ago       Exited              kube-controller-manager   1                   727cc0302f078       kube-controller-manager-kubernetes-upgrade-372591
	02ec59ec5ad68       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   24 seconds ago       Exited              etcd                      1                   9c8a35e275245       etcd-kubernetes-upgrade-372591
	595c38d05ab43       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   24 seconds ago       Exited              kube-apiserver            1                   6144f2536596a       kube-apiserver-kubernetes-upgrade-372591
	2e4f6e9dddba3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   76204356bf06e       coredns-5cfdc65f69-jd528
	421dc2b428779       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   About a minute ago   Exited              kube-proxy                0                   a1406241dc60c       kube-proxy-g8xmr
	
	
	==> coredns [1400471e3bb43ed29b6ec9dcbb7180775535a6f2c6d76bc7a98c2cb5d7f9b7c7] <==
	
	
	==> coredns [19283fd5d6633efa7156084c7eb70cfe6a7d9b0009edd64ae2dbad4c33a4cc0e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [2e4f6e9dddba3390d161c8c0a7ddbe98f65cc03b0d15b26ba8f0aac9832ef16e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [38fb4ef4663f1ac13ccfda6a8fb8a90f54f2bb868bfac2903a5892efff31441a] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-372591
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-372591
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:11:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-372591
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:12:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:12:57 +0000   Mon, 29 Jul 2024 18:11:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:12:57 +0000   Mon, 29 Jul 2024 18:11:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:12:57 +0000   Mon, 29 Jul 2024 18:11:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:12:57 +0000   Mon, 29 Jul 2024 18:11:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.171
	  Hostname:    kubernetes-upgrade-372591
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1c9f4e64b24b412faed278171798a755
	  System UUID:                1c9f4e64-b24b-412f-aed2-78171798a755
	  Boot ID:                    bdb73f64-0d5a-4fa5-a6ef-37217cbbab7e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-8qpq6                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     64s
	  kube-system                 coredns-5cfdc65f69-jd528                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     64s
	  kube-system                 etcd-kubernetes-upgrade-372591                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         67s
	  kube-system                 kube-apiserver-kubernetes-upgrade-372591             250m (12%)    0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-372591    200m (10%)    0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-proxy-g8xmr                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-scheduler-kubernetes-upgrade-372591             100m (5%)     0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20s                kube-proxy       
	  Normal  Starting                 62s                kube-proxy       
	  Normal  Starting                 76s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  74s (x8 over 76s)  kubelet          Node kubernetes-upgrade-372591 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     74s (x7 over 76s)  kubelet          Node kubernetes-upgrade-372591 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    74s (x8 over 76s)  kubelet          Node kubernetes-upgrade-372591 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           63s                node-controller  Node kubernetes-upgrade-372591 event: Registered Node kubernetes-upgrade-372591 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-372591 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-372591 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet          Node kubernetes-upgrade-372591 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-372591 event: Registered Node kubernetes-upgrade-372591 in Controller
	
	
	==> dmesg <==
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.551779] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.058680] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065089] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.192945] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.145045] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.295549] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +4.127466] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +2.375033] systemd-fstab-generator[863]: Ignoring "noauto" option for root device
	[  +0.060033] kauditd_printk_skb: 158 callbacks suppressed
	[  +9.577286] systemd-fstab-generator[1255]: Ignoring "noauto" option for root device
	[  +0.075433] kauditd_printk_skb: 69 callbacks suppressed
	[Jul29 18:12] kauditd_printk_skb: 65 callbacks suppressed
	[ +33.400382] systemd-fstab-generator[2222]: Ignoring "noauto" option for root device
	[  +0.085781] kauditd_printk_skb: 34 callbacks suppressed
	[  +0.063713] systemd-fstab-generator[2234]: Ignoring "noauto" option for root device
	[  +0.187310] systemd-fstab-generator[2248]: Ignoring "noauto" option for root device
	[  +0.157080] systemd-fstab-generator[2260]: Ignoring "noauto" option for root device
	[  +0.298504] systemd-fstab-generator[2288]: Ignoring "noauto" option for root device
	[  +1.111096] systemd-fstab-generator[2440]: Ignoring "noauto" option for root device
	[  +4.663576] kauditd_printk_skb: 229 callbacks suppressed
	[ +11.675433] systemd-fstab-generator[3539]: Ignoring "noauto" option for root device
	[  +5.642177] kauditd_printk_skb: 36 callbacks suppressed
	[  +0.660953] systemd-fstab-generator[3895]: Ignoring "noauto" option for root device
	
	
	==> etcd [02ec59ec5ad683ab865f43b358daadaf3cf9b0c5f5781b801f2b19ded4d15629] <==
	{"level":"info","ts":"2024-07-29T18:12:39.429169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T18:12:39.429201Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f received MsgPreVoteResp from 4e6b9cdcc1ed933f at term 2"}
	{"level":"info","ts":"2024-07-29T18:12:39.429245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T18:12:39.429254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f received MsgVoteResp from 4e6b9cdcc1ed933f at term 3"}
	{"level":"info","ts":"2024-07-29T18:12:39.429262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f became leader at term 3"}
	{"level":"info","ts":"2024-07-29T18:12:39.42927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4e6b9cdcc1ed933f elected leader 4e6b9cdcc1ed933f at term 3"}
	{"level":"info","ts":"2024-07-29T18:12:39.432136Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T18:12:39.43312Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T18:12:39.433883Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.171:2379"}
	{"level":"info","ts":"2024-07-29T18:12:39.43411Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T18:12:39.434685Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T18:12:39.435395Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T18:12:39.432083Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"4e6b9cdcc1ed933f","local-member-attributes":"{Name:kubernetes-upgrade-372591 ClientURLs:[https://192.168.39.171:2379]}","request-path":"/0/members/4e6b9cdcc1ed933f/attributes","cluster-id":"c9ee22fca1de3e71","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T18:12:39.445788Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T18:12:39.445842Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T18:12:41.119417Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T18:12:41.119465Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-372591","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.171:2380"],"advertise-client-urls":["https://192.168.39.171:2379"]}
	{"level":"warn","ts":"2024-07-29T18:12:41.119536Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T18:12:41.119617Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T18:12:41.148294Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.171:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T18:12:41.148395Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.171:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T18:12:41.148503Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"4e6b9cdcc1ed933f","current-leader-member-id":"4e6b9cdcc1ed933f"}
	{"level":"info","ts":"2024-07-29T18:12:41.155298Z","caller":"embed/etcd.go:580","msg":"stopping serving peer traffic","address":"192.168.39.171:2380"}
	{"level":"info","ts":"2024-07-29T18:12:41.155401Z","caller":"embed/etcd.go:585","msg":"stopped serving peer traffic","address":"192.168.39.171:2380"}
	{"level":"info","ts":"2024-07-29T18:12:41.155412Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-372591","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.171:2380"],"advertise-client-urls":["https://192.168.39.171:2379"]}
	
	
	==> etcd [c4cf33b6ddf42b20cac9805b2e1bfd434576b594d6737ebe13b386c459a067fb] <==
	{"level":"info","ts":"2024-07-29T18:12:54.080707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f switched to configuration voters=(5650782629426729791)"}
	{"level":"info","ts":"2024-07-29T18:12:54.080926Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c9ee22fca1de3e71","local-member-id":"4e6b9cdcc1ed933f","added-peer-id":"4e6b9cdcc1ed933f","added-peer-peer-urls":["https://192.168.39.171:2380"]}
	{"level":"info","ts":"2024-07-29T18:12:54.081151Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c9ee22fca1de3e71","local-member-id":"4e6b9cdcc1ed933f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:12:54.081247Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:12:54.099009Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T18:12:54.099234Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.39.171:2380"}
	{"level":"info","ts":"2024-07-29T18:12:54.099352Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.39.171:2380"}
	{"level":"info","ts":"2024-07-29T18:12:54.105421Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"4e6b9cdcc1ed933f","initial-advertise-peer-urls":["https://192.168.39.171:2380"],"listen-peer-urls":["https://192.168.39.171:2380"],"advertise-client-urls":["https://192.168.39.171:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.171:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T18:12:54.105494Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T18:12:55.92832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-29T18:12:55.928448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-29T18:12:55.928518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f received MsgPreVoteResp from 4e6b9cdcc1ed933f at term 3"}
	{"level":"info","ts":"2024-07-29T18:12:55.928562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f became candidate at term 4"}
	{"level":"info","ts":"2024-07-29T18:12:55.928603Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f received MsgVoteResp from 4e6b9cdcc1ed933f at term 4"}
	{"level":"info","ts":"2024-07-29T18:12:55.928634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f became leader at term 4"}
	{"level":"info","ts":"2024-07-29T18:12:55.928663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4e6b9cdcc1ed933f elected leader 4e6b9cdcc1ed933f at term 4"}
	{"level":"info","ts":"2024-07-29T18:12:55.933943Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"4e6b9cdcc1ed933f","local-member-attributes":"{Name:kubernetes-upgrade-372591 ClientURLs:[https://192.168.39.171:2379]}","request-path":"/0/members/4e6b9cdcc1ed933f/attributes","cluster-id":"c9ee22fca1de3e71","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T18:12:55.934086Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T18:12:55.934562Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T18:12:55.935287Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T18:12:55.935719Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T18:12:55.93585Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T18:12:55.936173Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T18:12:55.935307Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T18:12:55.951281Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.171:2379"}
	
	
	==> kernel <==
	 18:13:02 up 1 min,  0 users,  load average: 2.10, 0.65, 0.23
	Linux kubernetes-upgrade-372591 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [53e7b48b62dee585586f5a253e0ca6994cc5768b2bd7aa221be1ed155b447320] <==
	I0729 18:12:57.147429       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I0729 18:12:57.311515       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 18:12:57.312317       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 18:12:57.312371       1 policy_source.go:224] refreshing policies
	I0729 18:12:57.338864       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 18:12:57.346518       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 18:12:57.347056       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 18:12:57.347716       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0729 18:12:57.347804       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0729 18:12:57.351144       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 18:12:57.351319       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 18:12:57.351882       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 18:12:57.352066       1 aggregator.go:171] initial CRD sync complete...
	I0729 18:12:57.352121       1 autoregister_controller.go:144] Starting autoregister controller
	I0729 18:12:57.352145       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 18:12:57.352214       1 cache.go:39] Caches are synced for autoregister controller
	I0729 18:12:57.362269       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0729 18:12:58.161404       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 18:12:58.895669       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 18:12:58.907860       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 18:12:58.958131       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 18:12:59.080619       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 18:12:59.091204       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 18:13:00.233901       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 18:13:00.886153       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [595c38d05ab43783ead8c77666375934a9ed71f6cce133f1347b0b403fce2949] <==
	W0729 18:12:50.428519       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:12:50.492551       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:12:50.532061       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:12:50.539498       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:12:50.544129       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:12:50.551865       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:12:50.556409       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:12:50.574999       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:12:50.587627       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:12:50.646654       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:12:50.703287       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:12:50.708980       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:12:50.710279       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:12:50.734923       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:12:50.748377       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:12:50.772428       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:12:50.800641       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:12:50.828704       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:12:50.883689       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:12:50.897672       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:12:50.948123       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:12:50.952160       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:12:51.194447       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:12:51.207414       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:12:51.217012       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [50e157ab37db5c68192e7b2b7e3076313c185dbd66a655479fbb2e7734b5e867] <==
	I0729 18:13:01.065803       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0729 18:13:01.066924       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0729 18:13:01.068065       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0729 18:13:01.070381       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0729 18:13:01.080606       1 shared_informer.go:320] Caches are synced for taint
	I0729 18:13:01.080896       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0729 18:13:01.081027       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-372591"
	I0729 18:13:01.081087       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 18:13:01.129304       1 shared_informer.go:320] Caches are synced for disruption
	I0729 18:13:01.191701       1 shared_informer.go:320] Caches are synced for deployment
	I0729 18:13:01.214536       1 shared_informer.go:320] Caches are synced for HPA
	I0729 18:13:01.368323       1 shared_informer.go:320] Caches are synced for daemon sets
	I0729 18:13:01.635944       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0729 18:13:01.660539       1 shared_informer.go:320] Caches are synced for crt configmap
	I0729 18:13:01.735893       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 18:13:01.744606       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 18:13:01.765473       1 shared_informer.go:320] Caches are synced for PVC protection
	I0729 18:13:01.765382       1 shared_informer.go:320] Caches are synced for persistent volume
	I0729 18:13:01.785352       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 18:13:01.786558       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 18:13:01.786593       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 18:13:01.806477       1 shared_informer.go:320] Caches are synced for ephemeral
	I0729 18:13:01.819405       1 shared_informer.go:320] Caches are synced for stateful set
	I0729 18:13:01.823718       1 shared_informer.go:320] Caches are synced for expand
	I0729 18:13:01.830810       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [c5dace301b8f4c615e9a99a0206497609a3f92dcff97f5e063dddcaa90839fc4] <==
	I0729 18:12:38.709221       1 serving.go:386] Generated self-signed cert in-memory
	I0729 18:12:39.333705       1 controllermanager.go:188] "Starting" version="v1.31.0-beta.0"
	I0729 18:12:39.333797       1 controllermanager.go:190] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:12:39.335233       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 18:12:39.335408       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0729 18:12:39.335613       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 18:12:39.335833       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-proxy [421dc2b428779aa047f0ee87ebdc8a30e998a732708f5ae0476e771c9e8ad7a5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0729 18:11:59.895956       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0729 18:11:59.912439       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.39.171"]
	E0729 18:11:59.912542       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0729 18:11:59.957253       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0729 18:11:59.957326       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 18:11:59.957370       1 server_linux.go:170] "Using iptables Proxier"
	I0729 18:11:59.960091       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0729 18:11:59.960413       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0729 18:11:59.960456       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:11:59.961959       1 config.go:197] "Starting service config controller"
	I0729 18:11:59.962004       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 18:11:59.962044       1 config.go:104] "Starting endpoint slice config controller"
	I0729 18:11:59.962060       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 18:11:59.962696       1 config.go:326] "Starting node config controller"
	I0729 18:11:59.962733       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 18:12:00.062165       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 18:12:00.062230       1 shared_informer.go:320] Caches are synced for service config
	I0729 18:12:00.063607       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [b420d47749a9f4b9fe460adc3e89d5e74ff89b8d5a84d1a59b6eab04f91fb633] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0729 18:12:39.582617       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0729 18:12:40.924004       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.39.171"]
	E0729 18:12:40.926971       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0729 18:12:41.080116       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0729 18:12:41.080218       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 18:12:41.080261       1 server_linux.go:170] "Using iptables Proxier"
	I0729 18:12:41.085001       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0729 18:12:41.085271       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0729 18:12:41.085305       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:12:41.087046       1 config.go:197] "Starting service config controller"
	I0729 18:12:41.087080       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 18:12:41.087114       1 config.go:104] "Starting endpoint slice config controller"
	I0729 18:12:41.087119       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 18:12:41.087717       1 config.go:326] "Starting node config controller"
	I0729 18:12:41.087727       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 18:12:41.188123       1 shared_informer.go:320] Caches are synced for node config
	I0729 18:12:41.188190       1 shared_informer.go:320] Caches are synced for service config
	I0729 18:12:41.188231       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2fe18b0df01d37ebc5b532fc3662d194eec78672248d240caf7e6a8ab44774a5] <==
	I0729 18:12:39.299100       1 serving.go:386] Generated self-signed cert in-memory
	W0729 18:12:40.792530       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 18:12:40.792644       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 18:12:40.792689       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 18:12:40.792713       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 18:12:40.854107       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0729 18:12:40.854187       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0729 18:12:40.854225       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0729 18:12:40.858514       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0729 18:12:40.861057       1 server.go:237] "waiting for handlers to sync" err="context canceled"
	E0729 18:12:40.861158       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [63ee469a94495a1b93b59eb9234d3dbe59dcdc6a9c47f1ee7d67ac841d1ecc47] <==
	I0729 18:12:54.603790       1 serving.go:386] Generated self-signed cert in-memory
	W0729 18:12:57.225203       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 18:12:57.225247       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 18:12:57.225258       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 18:12:57.225264       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 18:12:57.292915       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0729 18:12:57.292950       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:12:57.301473       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 18:12:57.301693       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 18:12:57.302328       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 18:12:57.302393       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0729 18:12:57.403003       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 18:12:53 kubernetes-upgrade-372591 kubelet[3546]: I0729 18:12:53.363976    3546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39d53f327d13e2e2f8364bb45a65d4e3-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-372591\" (UID: \"39d53f327d13e2e2f8364bb45a65d4e3\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-372591"
	Jul 29 18:12:53 kubernetes-upgrade-372591 kubelet[3546]: I0729 18:12:53.363991    3546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/39d53f327d13e2e2f8364bb45a65d4e3-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-372591\" (UID: \"39d53f327d13e2e2f8364bb45a65d4e3\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-372591"
	Jul 29 18:12:53 kubernetes-upgrade-372591 kubelet[3546]: I0729 18:12:53.364007    3546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c81fea3f1e8d8b5837d8fd5e0fbf3179-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-372591\" (UID: \"c81fea3f1e8d8b5837d8fd5e0fbf3179\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-372591"
	Jul 29 18:12:53 kubernetes-upgrade-372591 kubelet[3546]: I0729 18:12:53.466494    3546 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-372591"
	Jul 29 18:12:53 kubernetes-upgrade-372591 kubelet[3546]: E0729 18:12:53.467245    3546 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.171:8443: connect: connection refused" node="kubernetes-upgrade-372591"
	Jul 29 18:12:53 kubernetes-upgrade-372591 kubelet[3546]: I0729 18:12:53.598565    3546 scope.go:117] "RemoveContainer" containerID="595c38d05ab43783ead8c77666375934a9ed71f6cce133f1347b0b403fce2949"
	Jul 29 18:12:53 kubernetes-upgrade-372591 kubelet[3546]: I0729 18:12:53.600807    3546 scope.go:117] "RemoveContainer" containerID="2fe18b0df01d37ebc5b532fc3662d194eec78672248d240caf7e6a8ab44774a5"
	Jul 29 18:12:53 kubernetes-upgrade-372591 kubelet[3546]: I0729 18:12:53.600956    3546 scope.go:117] "RemoveContainer" containerID="c5dace301b8f4c615e9a99a0206497609a3f92dcff97f5e063dddcaa90839fc4"
	Jul 29 18:12:53 kubernetes-upgrade-372591 kubelet[3546]: I0729 18:12:53.601435    3546 scope.go:117] "RemoveContainer" containerID="02ec59ec5ad683ab865f43b358daadaf3cf9b0c5f5781b801f2b19ded4d15629"
	Jul 29 18:12:53 kubernetes-upgrade-372591 kubelet[3546]: E0729 18:12:53.765398    3546 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-372591?timeout=10s\": dial tcp 192.168.39.171:8443: connect: connection refused" interval="800ms"
	Jul 29 18:12:53 kubernetes-upgrade-372591 kubelet[3546]: I0729 18:12:53.869228    3546 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-372591"
	Jul 29 18:12:53 kubernetes-upgrade-372591 kubelet[3546]: E0729 18:12:53.870009    3546 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.171:8443: connect: connection refused" node="kubernetes-upgrade-372591"
	Jul 29 18:12:54 kubernetes-upgrade-372591 kubelet[3546]: I0729 18:12:54.671586    3546 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-372591"
	Jul 29 18:12:57 kubernetes-upgrade-372591 kubelet[3546]: I0729 18:12:57.377225    3546 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-372591"
	Jul 29 18:12:57 kubernetes-upgrade-372591 kubelet[3546]: I0729 18:12:57.377594    3546 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-372591"
	Jul 29 18:12:57 kubernetes-upgrade-372591 kubelet[3546]: I0729 18:12:57.377653    3546 kuberuntime_manager.go:1524] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 18:12:57 kubernetes-upgrade-372591 kubelet[3546]: I0729 18:12:57.378688    3546 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 18:12:57 kubernetes-upgrade-372591 kubelet[3546]: E0729 18:12:57.399157    3546 kubelet.go:1900] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-372591\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-372591"
	Jul 29 18:12:58 kubernetes-upgrade-372591 kubelet[3546]: I0729 18:12:58.145377    3546 apiserver.go:52] "Watching apiserver"
	Jul 29 18:12:58 kubernetes-upgrade-372591 kubelet[3546]: I0729 18:12:58.159221    3546 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jul 29 18:12:58 kubernetes-upgrade-372591 kubelet[3546]: I0729 18:12:58.167617    3546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90c9abcf-0679-40c2-90f5-be93f0cc0fc0-xtables-lock\") pod \"kube-proxy-g8xmr\" (UID: \"90c9abcf-0679-40c2-90f5-be93f0cc0fc0\") " pod="kube-system/kube-proxy-g8xmr"
	Jul 29 18:12:58 kubernetes-upgrade-372591 kubelet[3546]: I0729 18:12:58.168062    3546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/97e28563-2221-46ec-84a6-a351456ccde4-tmp\") pod \"storage-provisioner\" (UID: \"97e28563-2221-46ec-84a6-a351456ccde4\") " pod="kube-system/storage-provisioner"
	Jul 29 18:12:58 kubernetes-upgrade-372591 kubelet[3546]: I0729 18:12:58.168240    3546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90c9abcf-0679-40c2-90f5-be93f0cc0fc0-lib-modules\") pod \"kube-proxy-g8xmr\" (UID: \"90c9abcf-0679-40c2-90f5-be93f0cc0fc0\") " pod="kube-system/kube-proxy-g8xmr"
	Jul 29 18:12:58 kubernetes-upgrade-372591 kubelet[3546]: I0729 18:12:58.452219    3546 scope.go:117] "RemoveContainer" containerID="f1d782dc7a3a659e4dce24410747b1b31dfd6747604bc674f1cf178e34c299b6"
	Jul 29 18:12:58 kubernetes-upgrade-372591 kubelet[3546]: I0729 18:12:58.453155    3546 scope.go:117] "RemoveContainer" containerID="1400471e3bb43ed29b6ec9dcbb7180775535a6f2c6d76bc7a98c2cb5d7f9b7c7"
	
	
	==> storage-provisioner [dd976d2cb71fbfbf2cae7f0d82d11362761275f462c4b927341d0f688e47b6a3] <==
	I0729 18:12:58.620095       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 18:12:58.646986       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 18:12:58.647048       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [f1d782dc7a3a659e4dce24410747b1b31dfd6747604bc674f1cf178e34c299b6] <==
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc000398480, 0x0, 0x0, 0x7f77a5e5b100)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc0003e4c80, 0x18e5530, 0xc0000b9440, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00021f260)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00021f260, 0x18b3d60, 0xc0002264b0, 0xc000398101, 0xc0001b04e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00021f260, 0x3b9aca00, 0x0, 0x1, 0xc0001b04e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc00021f260, 0x3b9aca00, 0xc0001b04e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	goroutine 91 [runnable]:
	k8s.io/apimachinery/pkg/util/wait.poller.func1.1(0xc000234360, 0x77359400, 0x0, 0xc000234300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:588 +0x135
	created by k8s.io/apimachinery/pkg/util/wait.poller.func1
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:571 +0x8c
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:13:00.805621   62510 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19345-11206/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-372591 -n kubernetes-upgrade-372591
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-372591 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-372591" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-372591
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-372591: (1.036238934s)
--- FAIL: TestKubernetesUpgrade (404.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (283.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-386663 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0729 18:16:52.902490   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-386663 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m42.961375615s)

                                                
                                                
-- stdout --
	* [old-k8s-version-386663] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19345
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-386663" primary control-plane node in "old-k8s-version-386663" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 18:16:25.824709   70090 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:16:25.824955   70090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:16:25.824968   70090 out.go:304] Setting ErrFile to fd 2...
	I0729 18:16:25.824976   70090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:16:25.825481   70090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 18:16:25.826479   70090 out.go:298] Setting JSON to false
	I0729 18:16:25.827701   70090 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7138,"bootTime":1722269848,"procs":309,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:16:25.827762   70090 start.go:139] virtualization: kvm guest
	I0729 18:16:25.829510   70090 out.go:177] * [old-k8s-version-386663] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:16:25.831099   70090 notify.go:220] Checking for updates...
	I0729 18:16:25.831111   70090 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 18:16:25.832384   70090 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:16:25.833845   70090 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:16:25.835211   70090 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 18:16:25.836387   70090 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:16:25.837531   70090 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:16:25.839232   70090 config.go:182] Loaded profile config "bridge-729010": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:16:25.839380   70090 config.go:182] Loaded profile config "enable-default-cni-729010": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:16:25.839497   70090 config.go:182] Loaded profile config "flannel-729010": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:16:25.839656   70090 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:16:25.877300   70090 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 18:16:25.878422   70090 start.go:297] selected driver: kvm2
	I0729 18:16:25.878439   70090 start.go:901] validating driver "kvm2" against <nil>
	I0729 18:16:25.878452   70090 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:16:25.879155   70090 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:16:25.879235   70090 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19345-11206/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:16:25.894561   70090 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:16:25.894614   70090 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 18:16:25.894833   70090 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:16:25.894894   70090 cni.go:84] Creating CNI manager for ""
	I0729 18:16:25.894912   70090 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:16:25.894925   70090 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 18:16:25.894987   70090 start.go:340] cluster config:
	{Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:16:25.895097   70090 iso.go:125] acquiring lock: {Name:mke302f851ce8256f9b44dd080ed38df68285cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:16:25.896665   70090 out.go:177] * Starting "old-k8s-version-386663" primary control-plane node in "old-k8s-version-386663" cluster
	I0729 18:16:25.898185   70090 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:16:25.898217   70090 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 18:16:25.898228   70090 cache.go:56] Caching tarball of preloaded images
	I0729 18:16:25.898321   70090 preload.go:172] Found /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:16:25.898333   70090 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 18:16:25.898445   70090 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/config.json ...
	I0729 18:16:25.898469   70090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/config.json: {Name:mk92e91f8399ba5b0a8bf97660beb037c51b9dc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:16:25.898607   70090 start.go:360] acquireMachinesLock for old-k8s-version-386663: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:16:37.051365   70090 start.go:364] duration metric: took 11.152734377s to acquireMachinesLock for "old-k8s-version-386663"
	I0729 18:16:37.051452   70090 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:16:37.051580   70090 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 18:16:37.053764   70090 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 18:16:37.053985   70090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:16:37.054034   70090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:16:37.070432   70090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42255
	I0729 18:16:37.070901   70090 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:16:37.071462   70090 main.go:141] libmachine: Using API Version  1
	I0729 18:16:37.071484   70090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:16:37.071851   70090 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:16:37.072056   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetMachineName
	I0729 18:16:37.072220   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:16:37.072389   70090 start.go:159] libmachine.API.Create for "old-k8s-version-386663" (driver="kvm2")
	I0729 18:16:37.072418   70090 client.go:168] LocalClient.Create starting
	I0729 18:16:37.072459   70090 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem
	I0729 18:16:37.072500   70090 main.go:141] libmachine: Decoding PEM data...
	I0729 18:16:37.072524   70090 main.go:141] libmachine: Parsing certificate...
	I0729 18:16:37.072608   70090 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem
	I0729 18:16:37.072635   70090 main.go:141] libmachine: Decoding PEM data...
	I0729 18:16:37.072649   70090 main.go:141] libmachine: Parsing certificate...
	I0729 18:16:37.072664   70090 main.go:141] libmachine: Running pre-create checks...
	I0729 18:16:37.072674   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .PreCreateCheck
	I0729 18:16:37.073059   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetConfigRaw
	I0729 18:16:37.073479   70090 main.go:141] libmachine: Creating machine...
	I0729 18:16:37.073492   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .Create
	I0729 18:16:37.073612   70090 main.go:141] libmachine: (old-k8s-version-386663) Creating KVM machine...
	I0729 18:16:37.074658   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | found existing default KVM network
	I0729 18:16:37.075971   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:16:37.075798   70173 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ca:a4:44} reservation:<nil>}
	I0729 18:16:37.076987   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:16:37.076906   70173 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003500b0}
	I0729 18:16:37.077011   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | created network xml: 
	I0729 18:16:37.077022   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | <network>
	I0729 18:16:37.077032   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG |   <name>mk-old-k8s-version-386663</name>
	I0729 18:16:37.077041   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG |   <dns enable='no'/>
	I0729 18:16:37.077047   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG |   
	I0729 18:16:37.077058   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0729 18:16:37.077067   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG |     <dhcp>
	I0729 18:16:37.077082   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0729 18:16:37.077098   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG |     </dhcp>
	I0729 18:16:37.077109   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG |   </ip>
	I0729 18:16:37.077120   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG |   
	I0729 18:16:37.077127   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | </network>
	I0729 18:16:37.077141   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | 
	I0729 18:16:37.082437   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | trying to create private KVM network mk-old-k8s-version-386663 192.168.50.0/24...
	I0729 18:16:37.155168   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | private KVM network mk-old-k8s-version-386663 192.168.50.0/24 created
	I0729 18:16:37.155210   70090 main.go:141] libmachine: (old-k8s-version-386663) Setting up store path in /home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663 ...
	I0729 18:16:37.155226   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:16:37.155173   70173 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 18:16:37.155249   70090 main.go:141] libmachine: (old-k8s-version-386663) Building disk image from file:///home/jenkins/minikube-integration/19345-11206/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 18:16:37.155315   70090 main.go:141] libmachine: (old-k8s-version-386663) Downloading /home/jenkins/minikube-integration/19345-11206/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19345-11206/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 18:16:37.416038   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:16:37.415927   70173 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa...
	I0729 18:16:37.516087   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:16:37.515980   70173 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/old-k8s-version-386663.rawdisk...
	I0729 18:16:37.516125   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | Writing magic tar header
	I0729 18:16:37.516144   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | Writing SSH key tar header
	I0729 18:16:37.516158   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:16:37.516109   70173 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663 ...
	I0729 18:16:37.516266   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663
	I0729 18:16:37.516289   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube/machines
	I0729 18:16:37.516304   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 18:16:37.516315   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206
	I0729 18:16:37.516331   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 18:16:37.516342   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | Checking permissions on dir: /home/jenkins
	I0729 18:16:37.516356   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | Checking permissions on dir: /home
	I0729 18:16:37.516367   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | Skipping /home - not owner
	I0729 18:16:37.516419   70090 main.go:141] libmachine: (old-k8s-version-386663) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663 (perms=drwx------)
	I0729 18:16:37.516443   70090 main.go:141] libmachine: (old-k8s-version-386663) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube/machines (perms=drwxr-xr-x)
	I0729 18:16:37.516459   70090 main.go:141] libmachine: (old-k8s-version-386663) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube (perms=drwxr-xr-x)
	I0729 18:16:37.516474   70090 main.go:141] libmachine: (old-k8s-version-386663) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206 (perms=drwxrwxr-x)
	I0729 18:16:37.516486   70090 main.go:141] libmachine: (old-k8s-version-386663) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 18:16:37.516497   70090 main.go:141] libmachine: (old-k8s-version-386663) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 18:16:37.516507   70090 main.go:141] libmachine: (old-k8s-version-386663) Creating domain...
	I0729 18:16:37.518455   70090 main.go:141] libmachine: (old-k8s-version-386663) define libvirt domain using xml: 
	I0729 18:16:37.518480   70090 main.go:141] libmachine: (old-k8s-version-386663) <domain type='kvm'>
	I0729 18:16:37.518493   70090 main.go:141] libmachine: (old-k8s-version-386663)   <name>old-k8s-version-386663</name>
	I0729 18:16:37.518506   70090 main.go:141] libmachine: (old-k8s-version-386663)   <memory unit='MiB'>2200</memory>
	I0729 18:16:37.518516   70090 main.go:141] libmachine: (old-k8s-version-386663)   <vcpu>2</vcpu>
	I0729 18:16:37.518524   70090 main.go:141] libmachine: (old-k8s-version-386663)   <features>
	I0729 18:16:37.518537   70090 main.go:141] libmachine: (old-k8s-version-386663)     <acpi/>
	I0729 18:16:37.518544   70090 main.go:141] libmachine: (old-k8s-version-386663)     <apic/>
	I0729 18:16:37.518557   70090 main.go:141] libmachine: (old-k8s-version-386663)     <pae/>
	I0729 18:16:37.518567   70090 main.go:141] libmachine: (old-k8s-version-386663)     
	I0729 18:16:37.518576   70090 main.go:141] libmachine: (old-k8s-version-386663)   </features>
	I0729 18:16:37.518587   70090 main.go:141] libmachine: (old-k8s-version-386663)   <cpu mode='host-passthrough'>
	I0729 18:16:37.518619   70090 main.go:141] libmachine: (old-k8s-version-386663)   
	I0729 18:16:37.518644   70090 main.go:141] libmachine: (old-k8s-version-386663)   </cpu>
	I0729 18:16:37.518656   70090 main.go:141] libmachine: (old-k8s-version-386663)   <os>
	I0729 18:16:37.518667   70090 main.go:141] libmachine: (old-k8s-version-386663)     <type>hvm</type>
	I0729 18:16:37.518684   70090 main.go:141] libmachine: (old-k8s-version-386663)     <boot dev='cdrom'/>
	I0729 18:16:37.518694   70090 main.go:141] libmachine: (old-k8s-version-386663)     <boot dev='hd'/>
	I0729 18:16:37.518714   70090 main.go:141] libmachine: (old-k8s-version-386663)     <bootmenu enable='no'/>
	I0729 18:16:37.518724   70090 main.go:141] libmachine: (old-k8s-version-386663)   </os>
	I0729 18:16:37.518742   70090 main.go:141] libmachine: (old-k8s-version-386663)   <devices>
	I0729 18:16:37.518760   70090 main.go:141] libmachine: (old-k8s-version-386663)     <disk type='file' device='cdrom'>
	I0729 18:16:37.518777   70090 main.go:141] libmachine: (old-k8s-version-386663)       <source file='/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/boot2docker.iso'/>
	I0729 18:16:37.518788   70090 main.go:141] libmachine: (old-k8s-version-386663)       <target dev='hdc' bus='scsi'/>
	I0729 18:16:37.518801   70090 main.go:141] libmachine: (old-k8s-version-386663)       <readonly/>
	I0729 18:16:37.518814   70090 main.go:141] libmachine: (old-k8s-version-386663)     </disk>
	I0729 18:16:37.518828   70090 main.go:141] libmachine: (old-k8s-version-386663)     <disk type='file' device='disk'>
	I0729 18:16:37.518840   70090 main.go:141] libmachine: (old-k8s-version-386663)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 18:16:37.518891   70090 main.go:141] libmachine: (old-k8s-version-386663)       <source file='/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/old-k8s-version-386663.rawdisk'/>
	I0729 18:16:37.518910   70090 main.go:141] libmachine: (old-k8s-version-386663)       <target dev='hda' bus='virtio'/>
	I0729 18:16:37.518924   70090 main.go:141] libmachine: (old-k8s-version-386663)     </disk>
	I0729 18:16:37.518935   70090 main.go:141] libmachine: (old-k8s-version-386663)     <interface type='network'>
	I0729 18:16:37.518947   70090 main.go:141] libmachine: (old-k8s-version-386663)       <source network='mk-old-k8s-version-386663'/>
	I0729 18:16:37.518957   70090 main.go:141] libmachine: (old-k8s-version-386663)       <model type='virtio'/>
	I0729 18:16:37.518967   70090 main.go:141] libmachine: (old-k8s-version-386663)     </interface>
	I0729 18:16:37.518979   70090 main.go:141] libmachine: (old-k8s-version-386663)     <interface type='network'>
	I0729 18:16:37.519002   70090 main.go:141] libmachine: (old-k8s-version-386663)       <source network='default'/>
	I0729 18:16:37.519017   70090 main.go:141] libmachine: (old-k8s-version-386663)       <model type='virtio'/>
	I0729 18:16:37.519029   70090 main.go:141] libmachine: (old-k8s-version-386663)     </interface>
	I0729 18:16:37.519040   70090 main.go:141] libmachine: (old-k8s-version-386663)     <serial type='pty'>
	I0729 18:16:37.519053   70090 main.go:141] libmachine: (old-k8s-version-386663)       <target port='0'/>
	I0729 18:16:37.519064   70090 main.go:141] libmachine: (old-k8s-version-386663)     </serial>
	I0729 18:16:37.519075   70090 main.go:141] libmachine: (old-k8s-version-386663)     <console type='pty'>
	I0729 18:16:37.519083   70090 main.go:141] libmachine: (old-k8s-version-386663)       <target type='serial' port='0'/>
	I0729 18:16:37.519093   70090 main.go:141] libmachine: (old-k8s-version-386663)     </console>
	I0729 18:16:37.519106   70090 main.go:141] libmachine: (old-k8s-version-386663)     <rng model='virtio'>
	I0729 18:16:37.519119   70090 main.go:141] libmachine: (old-k8s-version-386663)       <backend model='random'>/dev/random</backend>
	I0729 18:16:37.519127   70090 main.go:141] libmachine: (old-k8s-version-386663)     </rng>
	I0729 18:16:37.519139   70090 main.go:141] libmachine: (old-k8s-version-386663)     
	I0729 18:16:37.519149   70090 main.go:141] libmachine: (old-k8s-version-386663)     
	I0729 18:16:37.519161   70090 main.go:141] libmachine: (old-k8s-version-386663)   </devices>
	I0729 18:16:37.519177   70090 main.go:141] libmachine: (old-k8s-version-386663) </domain>
	I0729 18:16:37.519203   70090 main.go:141] libmachine: (old-k8s-version-386663) 
	I0729 18:16:37.523395   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:47:22:1b in network default
	I0729 18:16:37.524164   70090 main.go:141] libmachine: (old-k8s-version-386663) Ensuring networks are active...
	I0729 18:16:37.524182   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:16:37.524938   70090 main.go:141] libmachine: (old-k8s-version-386663) Ensuring network default is active
	I0729 18:16:37.525257   70090 main.go:141] libmachine: (old-k8s-version-386663) Ensuring network mk-old-k8s-version-386663 is active
	I0729 18:16:37.525959   70090 main.go:141] libmachine: (old-k8s-version-386663) Getting domain xml...
	I0729 18:16:37.526846   70090 main.go:141] libmachine: (old-k8s-version-386663) Creating domain...
	I0729 18:16:38.899194   70090 main.go:141] libmachine: (old-k8s-version-386663) Waiting to get IP...
	I0729 18:16:38.900476   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:16:38.901359   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:16:38.901517   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:16:38.901433   70173 retry.go:31] will retry after 249.446166ms: waiting for machine to come up
	I0729 18:16:39.152925   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:16:39.153540   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:16:39.153568   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:16:39.153463   70173 retry.go:31] will retry after 334.775688ms: waiting for machine to come up
	I0729 18:16:39.489823   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:16:39.490437   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:16:39.490461   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:16:39.490373   70173 retry.go:31] will retry after 457.892328ms: waiting for machine to come up
	I0729 18:16:39.949848   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:16:39.950425   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:16:39.950453   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:16:39.950397   70173 retry.go:31] will retry after 492.438924ms: waiting for machine to come up
	I0729 18:16:40.444775   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:16:40.445146   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:16:40.445174   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:16:40.445100   70173 retry.go:31] will retry after 531.282132ms: waiting for machine to come up
	I0729 18:16:40.977796   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:16:40.978267   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:16:40.978288   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:16:40.978237   70173 retry.go:31] will retry after 827.282459ms: waiting for machine to come up
	I0729 18:16:41.807224   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:16:41.807801   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:16:41.807828   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:16:41.807755   70173 retry.go:31] will retry after 784.977149ms: waiting for machine to come up
	I0729 18:16:42.594250   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:16:42.594818   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:16:42.594843   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:16:42.594787   70173 retry.go:31] will retry after 1.413519074s: waiting for machine to come up
	I0729 18:16:44.010342   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:16:44.010954   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:16:44.010983   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:16:44.010918   70173 retry.go:31] will retry after 1.311254911s: waiting for machine to come up
	I0729 18:16:45.323511   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:16:45.324016   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:16:45.324042   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:16:45.323984   70173 retry.go:31] will retry after 2.048450391s: waiting for machine to come up
	I0729 18:16:47.373954   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:16:47.374745   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:16:47.374772   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:16:47.374702   70173 retry.go:31] will retry after 2.804780404s: waiting for machine to come up
	I0729 18:16:50.182600   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:16:50.183216   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:16:50.183286   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:16:50.183177   70173 retry.go:31] will retry after 2.589806593s: waiting for machine to come up
	I0729 18:16:52.774313   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:16:52.774907   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:16:52.774929   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:16:52.774861   70173 retry.go:31] will retry after 3.036529063s: waiting for machine to come up
	I0729 18:16:55.813046   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:16:55.813555   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:16:55.813597   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:16:55.813532   70173 retry.go:31] will retry after 5.582948285s: waiting for machine to come up
	I0729 18:17:01.399785   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:01.400415   70090 main.go:141] libmachine: (old-k8s-version-386663) Found IP for machine: 192.168.50.70
	I0729 18:17:01.400439   70090 main.go:141] libmachine: (old-k8s-version-386663) Reserving static IP address...
	I0729 18:17:01.400452   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has current primary IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:01.400773   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-386663", mac: "52:54:00:78:b6:ac", ip: "192.168.50.70"} in network mk-old-k8s-version-386663
	I0729 18:17:01.480873   70090 main.go:141] libmachine: (old-k8s-version-386663) Reserved static IP address: 192.168.50.70
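	(Reference note, not part of the test run: the DHCP lease that libmachine was polling for above can also be inspected directly against libvirt. A minimal sketch, run on the host and assuming the default qemu:///system connection used by this driver:)
	    # list leases handed out on the machine's private libvirt network
	    virsh --connect qemu:///system net-dhcp-leases mk-old-k8s-version-386663
	    # the second NIC sits on the "default" network and can be checked the same way
	    virsh --connect qemu:///system net-dhcp-leases default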
	I0729 18:17:01.480921   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | Getting to WaitForSSH function...
	I0729 18:17:01.480933   70090 main.go:141] libmachine: (old-k8s-version-386663) Waiting for SSH to be available...
	I0729 18:17:01.483666   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:01.484136   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:16:53 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:minikube Clientid:01:52:54:00:78:b6:ac}
	I0729 18:17:01.484160   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:01.484346   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | Using SSH client type: external
	I0729 18:17:01.484374   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa (-rw-------)
	I0729 18:17:01.484402   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:17:01.484423   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | About to run SSH command:
	I0729 18:17:01.484436   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | exit 0
	I0729 18:17:01.623099   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | SSH cmd err, output: <nil>: 
	I0729 18:17:01.623386   70090 main.go:141] libmachine: (old-k8s-version-386663) KVM machine creation complete!
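	(Reference note: the SSH reachability probe above can be reproduced by hand with the same key, user, and address that libmachine logs, using a subset of the options shown in the "Using SSH client type: external" line. A sketch, assuming the key file from this run still exists:)
	    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	        -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa \
	        docker@192.168.50.70 'exit 0' && echo "ssh reachable"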
	I0729 18:17:01.623778   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetConfigRaw
	I0729 18:17:01.624397   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:17:01.624608   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:17:01.624751   70090 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 18:17:01.624767   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetState
	I0729 18:17:01.626150   70090 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 18:17:01.626167   70090 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 18:17:01.626175   70090 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 18:17:01.626184   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:17:01.628745   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:01.629213   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:16:53 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:17:01.629248   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:01.629372   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:17:01.629541   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:17:01.629714   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:17:01.629845   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:17:01.629988   70090 main.go:141] libmachine: Using SSH client type: native
	I0729 18:17:01.630233   70090 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:17:01.630252   70090 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 18:17:01.742835   70090 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:17:01.742862   70090 main.go:141] libmachine: Detecting the provisioner...
	I0729 18:17:01.742874   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:17:01.746475   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:01.746881   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:16:53 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:17:01.746931   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:01.747136   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:17:01.747354   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:17:01.747553   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:17:01.747707   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:17:01.747872   70090 main.go:141] libmachine: Using SSH client type: native
	I0729 18:17:01.748119   70090 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:17:01.748138   70090 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 18:17:01.863876   70090 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 18:17:01.863970   70090 main.go:141] libmachine: found compatible host: buildroot
	I0729 18:17:01.863987   70090 main.go:141] libmachine: Provisioning with buildroot...
	I0729 18:17:01.864001   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetMachineName
	I0729 18:17:01.864265   70090 buildroot.go:166] provisioning hostname "old-k8s-version-386663"
	I0729 18:17:01.864292   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetMachineName
	I0729 18:17:01.864462   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:17:01.868141   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:01.868705   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:16:53 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:17:01.868734   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:01.868875   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:17:01.869070   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:17:01.869266   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:17:01.869443   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:17:01.869639   70090 main.go:141] libmachine: Using SSH client type: native
	I0729 18:17:01.869840   70090 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:17:01.869865   70090 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-386663 && echo "old-k8s-version-386663" | sudo tee /etc/hostname
	I0729 18:17:02.005159   70090 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-386663
	
	I0729 18:17:02.005189   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:17:02.008350   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:02.008800   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:16:53 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:17:02.008824   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:02.008984   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:17:02.009169   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:17:02.009332   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:17:02.009475   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:17:02.009651   70090 main.go:141] libmachine: Using SSH client type: native
	I0729 18:17:02.009861   70090 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:17:02.009883   70090 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-386663' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-386663/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-386663' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:17:02.133378   70090 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:17:02.133402   70090 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:17:02.133437   70090 buildroot.go:174] setting up certificates
	I0729 18:17:02.133451   70090 provision.go:84] configureAuth start
	I0729 18:17:02.133467   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetMachineName
	I0729 18:17:02.133753   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:17:02.137039   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:02.137482   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:16:53 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:17:02.137507   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:02.137690   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:17:02.140078   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:02.140443   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:16:53 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:17:02.140468   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:02.140618   70090 provision.go:143] copyHostCerts
	I0729 18:17:02.140666   70090 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:17:02.140678   70090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:17:02.140736   70090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:17:02.140872   70090 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:17:02.140883   70090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:17:02.140921   70090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:17:02.141466   70090 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:17:02.141477   70090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:17:02.141504   70090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:17:02.141579   70090 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-386663 san=[127.0.0.1 192.168.50.70 localhost minikube old-k8s-version-386663]
	I0729 18:17:02.426003   70090 provision.go:177] copyRemoteCerts
	I0729 18:17:02.426103   70090 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:17:02.426140   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:17:02.429541   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:02.429981   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:16:53 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:17:02.430048   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:02.430338   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:17:02.430588   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:17:02.430778   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:17:02.430981   70090 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:17:02.521741   70090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:17:02.552947   70090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 18:17:02.579299   70090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:17:02.607898   70090 provision.go:87] duration metric: took 474.430689ms to configureAuth
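	(Reference note: configureAuth above generated a server certificate with the san=[...] list shown and copied it to /etc/docker on the guest. A hypothetical verification sketch, not something the test runs, using the profile name from this run:)
	    # files copied by copyRemoteCerts land under /etc/docker on the guest
	    minikube -p old-k8s-version-386663 ssh -- ls -l /etc/docker
	    # the SANs should match the san=[...] list generated above
	    minikube -p old-k8s-version-386663 ssh "sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"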
	I0729 18:17:02.607939   70090 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:17:02.608149   70090 config.go:182] Loaded profile config "old-k8s-version-386663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 18:17:02.608223   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:17:02.611334   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:02.611789   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:16:53 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:17:02.611826   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:02.611991   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:17:02.612167   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:17:02.612331   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:17:02.612503   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:17:02.612701   70090 main.go:141] libmachine: Using SSH client type: native
	I0729 18:17:02.612916   70090 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:17:02.612933   70090 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:17:02.929860   70090 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:17:02.929889   70090 main.go:141] libmachine: Checking connection to Docker...
	I0729 18:17:02.929899   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetURL
	I0729 18:17:02.931586   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | Using libvirt version 6000000
	I0729 18:17:02.934398   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:02.934822   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:16:53 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:17:02.934856   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:02.934993   70090 main.go:141] libmachine: Docker is up and running!
	I0729 18:17:02.935005   70090 main.go:141] libmachine: Reticulating splines...
	I0729 18:17:02.935012   70090 client.go:171] duration metric: took 25.862584131s to LocalClient.Create
	I0729 18:17:02.935035   70090 start.go:167] duration metric: took 25.862648575s to libmachine.API.Create "old-k8s-version-386663"
	I0729 18:17:02.935048   70090 start.go:293] postStartSetup for "old-k8s-version-386663" (driver="kvm2")
	I0729 18:17:02.935062   70090 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:17:02.935083   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:17:02.935348   70090 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:17:02.935377   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:17:02.937737   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:02.938055   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:16:53 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:17:02.938092   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:02.938228   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:17:02.938430   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:17:02.938644   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:17:02.938806   70090 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:17:03.026042   70090 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:17:03.031089   70090 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:17:03.031129   70090 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:17:03.031211   70090 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:17:03.031292   70090 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:17:03.031573   70090 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:17:03.044092   70090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:17:03.071920   70090 start.go:296] duration metric: took 136.841535ms for postStartSetup
	I0729 18:17:03.071972   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetConfigRaw
	I0729 18:17:03.073033   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:17:03.076110   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:03.076525   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:16:53 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:17:03.076552   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:03.076787   70090 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/config.json ...
	I0729 18:17:03.077032   70090 start.go:128] duration metric: took 26.025437663s to createHost
	I0729 18:17:03.077056   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:17:03.079523   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:03.079928   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:16:53 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:17:03.079994   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:03.080120   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:17:03.080309   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:17:03.080487   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:17:03.080631   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:17:03.080788   70090 main.go:141] libmachine: Using SSH client type: native
	I0729 18:17:03.080988   70090 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:17:03.081002   70090 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 18:17:03.196020   70090 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277023.129580157
	
	I0729 18:17:03.196043   70090 fix.go:216] guest clock: 1722277023.129580157
	I0729 18:17:03.196053   70090 fix.go:229] Guest: 2024-07-29 18:17:03.129580157 +0000 UTC Remote: 2024-07-29 18:17:03.077045823 +0000 UTC m=+37.284879984 (delta=52.534334ms)
	I0729 18:17:03.196099   70090 fix.go:200] guest clock delta is within tolerance: 52.534334ms
	I0729 18:17:03.196106   70090 start.go:83] releasing machines lock for "old-k8s-version-386663", held for 26.144700639s
	I0729 18:17:03.196134   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:17:03.196438   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:17:03.199453   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:03.199857   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:16:53 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:17:03.199886   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:03.200049   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:17:03.200690   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:17:03.200899   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:17:03.201039   70090 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:17:03.201085   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:17:03.201133   70090 ssh_runner.go:195] Run: cat /version.json
	I0729 18:17:03.201159   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:17:03.203933   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:03.204040   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:03.204329   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:16:53 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:17:03.204356   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:03.204484   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:16:53 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:17:03.204512   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:03.204585   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:17:03.204690   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:17:03.204785   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:17:03.204899   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:17:03.204967   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:17:03.205051   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:17:03.205148   70090 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:17:03.205188   70090 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:17:03.296013   70090 ssh_runner.go:195] Run: systemctl --version
	I0729 18:17:03.317166   70090 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:17:03.485695   70090 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:17:03.492533   70090 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:17:03.492628   70090 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:17:03.509929   70090 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:17:03.509956   70090 start.go:495] detecting cgroup driver to use...
	I0729 18:17:03.510018   70090 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:17:03.528207   70090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:17:03.544684   70090 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:17:03.544750   70090 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:17:03.560422   70090 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:17:03.576122   70090 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:17:03.699654   70090 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:17:03.868694   70090 docker.go:233] disabling docker service ...
	I0729 18:17:03.868767   70090 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:17:03.886129   70090 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:17:03.904213   70090 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:17:04.066776   70090 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:17:04.205898   70090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:17:04.221607   70090 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:17:04.243390   70090 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 18:17:04.243460   70090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:17:04.255565   70090 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:17:04.255642   70090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:17:04.267403   70090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:17:04.279446   70090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:17:04.294170   70090 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:17:04.307947   70090 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:17:04.319640   70090 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:17:04.319706   70090 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:17:04.335317   70090 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:17:04.350977   70090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:17:04.511213   70090 ssh_runner.go:195] Run: sudo systemctl restart crio
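	(Reference note: the three sed edits above rewrite the pause image, cgroup manager, and conmon cgroup in the CRI-O drop-in before the restart. A sketch of how to confirm the result on the guest, with the expected values taken from those sed commands rather than copied from the VM:)
	    # inspect the keys the sed edits above rewrote
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	    # expected after the edits:
	    #   pause_image = "registry.k8s.io/pause:3.2"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"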
	I0729 18:17:04.701771   70090 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:17:04.701872   70090 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:17:04.711331   70090 start.go:563] Will wait 60s for crictl version
	I0729 18:17:04.711400   70090 ssh_runner.go:195] Run: which crictl
	I0729 18:17:04.718606   70090 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:17:04.766290   70090 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:17:04.766389   70090 ssh_runner.go:195] Run: crio --version
	I0729 18:17:04.802781   70090 ssh_runner.go:195] Run: crio --version
	I0729 18:17:04.841529   70090 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 18:17:04.842873   70090 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:17:04.846048   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:04.846461   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:16:53 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:17:04.846489   70090 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:17:04.846675   70090 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 18:17:04.852403   70090 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
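	(Reference note: the bash one-liner above appends the host-gateway entry to the guest's /etc/hosts. A small check of the result, hypothetical and not part of the test:)
	    # confirm the entry the command above appends
	    minikube -p old-k8s-version-386663 ssh -- grep host.minikube.internal /etc/hosts
	    # expected: 192.168.50.1	host.minikube.internal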
	I0729 18:17:04.871101   70090 kubeadm.go:883] updating cluster {Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:17:04.871220   70090 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:17:04.871275   70090 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:17:04.917063   70090 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 18:17:04.917146   70090 ssh_runner.go:195] Run: which lz4
	I0729 18:17:04.922540   70090 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 18:17:04.928756   70090 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:17:04.928805   70090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 18:17:06.646799   70090 crio.go:462] duration metric: took 1.724281177s to copy over tarball
	I0729 18:17:06.646888   70090 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:17:09.465079   70090 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.818161943s)
	I0729 18:17:09.465101   70090 crio.go:469] duration metric: took 2.818271689s to extract the tarball
	I0729 18:17:09.465109   70090 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 18:17:09.509612   70090 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:17:09.564813   70090 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 18:17:09.564837   70090 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 18:17:09.564932   70090 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:17:09.564989   70090 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:17:09.565002   70090 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:17:09.565068   70090 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:17:09.564934   70090 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:17:09.565211   70090 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 18:17:09.565267   70090 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:17:09.565352   70090 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 18:17:09.566762   70090 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 18:17:09.566866   70090 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:17:09.567029   70090 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:17:09.567389   70090 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:17:09.567628   70090 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:17:09.567856   70090 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:17:09.567912   70090 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:17:09.567979   70090 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 18:17:09.719916   70090 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:17:09.746795   70090 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 18:17:09.747437   70090 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:17:09.749388   70090 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:17:09.752852   70090 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 18:17:09.755571   70090 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:17:09.778830   70090 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 18:17:09.818662   70090 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 18:17:09.818707   70090 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:17:09.818758   70090 ssh_runner.go:195] Run: which crictl
	I0729 18:17:09.881421   70090 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:17:09.908844   70090 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 18:17:09.908894   70090 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:17:09.908945   70090 ssh_runner.go:195] Run: which crictl
	I0729 18:17:09.915793   70090 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 18:17:09.915839   70090 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:17:09.915886   70090 ssh_runner.go:195] Run: which crictl
	I0729 18:17:09.916016   70090 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 18:17:09.916045   70090 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:17:09.916075   70090 ssh_runner.go:195] Run: which crictl
	I0729 18:17:09.939379   70090 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 18:17:09.939439   70090 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 18:17:09.939461   70090 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 18:17:09.939486   70090 ssh_runner.go:195] Run: which crictl
	I0729 18:17:09.939493   70090 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:17:09.939533   70090 ssh_runner.go:195] Run: which crictl
	I0729 18:17:09.954076   70090 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 18:17:09.954130   70090 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 18:17:09.954172   70090 ssh_runner.go:195] Run: which crictl
	I0729 18:17:09.954182   70090 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:17:10.096842   70090 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 18:17:10.096865   70090 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:17:10.096896   70090 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:17:10.096948   70090 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:17:10.096959   70090 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 18:17:10.096979   70090 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 18:17:10.097044   70090 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 18:17:10.230267   70090 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 18:17:10.236952   70090 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 18:17:10.237082   70090 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 18:17:10.238562   70090 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 18:17:10.247731   70090 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 18:17:10.247818   70090 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 18:17:10.247868   70090 cache_images.go:92] duration metric: took 683.015794ms to LoadCachedImages
	W0729 18:17:10.247954   70090 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0729 18:17:10.247968   70090 kubeadm.go:934] updating node { 192.168.50.70 8443 v1.20.0 crio true true} ...
	I0729 18:17:10.248105   70090 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-386663 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:17:10.248179   70090 ssh_runner.go:195] Run: crio config
	I0729 18:17:10.297115   70090 cni.go:84] Creating CNI manager for ""
	I0729 18:17:10.297134   70090 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:17:10.297142   70090 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:17:10.297158   70090 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.70 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-386663 NodeName:old-k8s-version-386663 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 18:17:10.297287   70090 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-386663"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:17:10.297354   70090 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 18:17:10.308486   70090 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:17:10.308558   70090 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:17:10.319064   70090 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 18:17:10.340184   70090 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:17:10.360680   70090 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
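	For context, the kubeadm.yaml.new written above is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration separated by "---"). A minimal sketch, assuming only the Go standard library and the /var/tmp/minikube/kubeadm.yaml.new path taken from this log, that lists the kind of each document:

	// Sketch only (not part of minikube): print the "kind" of every YAML document
	// in the multi-document kubeadm config written above. Path is an assumption
	// taken from the log line.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// kubeadm concatenates the four configuration documents with "---" separators.
		for _, doc := range strings.Split(string(data), "\n---") {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
					fmt.Println(strings.TrimSpace(line))
				}
			}
		}
	}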
	I0729 18:17:10.378766   70090 ssh_runner.go:195] Run: grep 192.168.50.70	control-plane.minikube.internal$ /etc/hosts
	I0729 18:17:10.382931   70090 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:17:10.396340   70090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:17:10.536870   70090 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:17:10.560001   70090 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663 for IP: 192.168.50.70
	I0729 18:17:10.560019   70090 certs.go:194] generating shared ca certs ...
	I0729 18:17:10.560034   70090 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:10.560195   70090 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:17:10.560254   70090 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:17:10.560267   70090 certs.go:256] generating profile certs ...
	I0729 18:17:10.560329   70090 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/client.key
	I0729 18:17:10.560347   70090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/client.crt with IP's: []
	I0729 18:17:10.658058   70090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/client.crt ...
	I0729 18:17:10.658084   70090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/client.crt: {Name:mk50d7ea2e6c077f2531d5371b8a2f688ccf6055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:10.658266   70090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/client.key ...
	I0729 18:17:10.658285   70090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/client.key: {Name:mka841b8b8919dcf3a43f1bfc014d3eec4b47777 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:10.658418   70090 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.key.71ea3f9f
	I0729 18:17:10.658441   70090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.crt.71ea3f9f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.70]
	I0729 18:17:10.798142   70090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.crt.71ea3f9f ...
	I0729 18:17:10.798178   70090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.crt.71ea3f9f: {Name:mkb771223c28136ae82beca29b7ff3d272bbf83a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:10.798415   70090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.key.71ea3f9f ...
	I0729 18:17:10.798439   70090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.key.71ea3f9f: {Name:mkb07660883ea46d15af001fe4f8f71676b47cd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:10.798566   70090 certs.go:381] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.crt.71ea3f9f -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.crt
	I0729 18:17:10.798673   70090 certs.go:385] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.key.71ea3f9f -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.key
	I0729 18:17:10.798752   70090 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.key
	I0729 18:17:10.798776   70090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.crt with IP's: []
	I0729 18:17:10.851215   70090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.crt ...
	I0729 18:17:10.851241   70090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.crt: {Name:mk235cc47c07b207e7abcc6135d36cd1a7fbc8d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:10.851397   70090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.key ...
	I0729 18:17:10.851410   70090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.key: {Name:mkb3a5f5804ca5b45b3607518901335e632f3039 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
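	The profile certificates generated above (client, apiserver, aggregator proxy-client) are ordinary X.509 certificates signed against the shared minikube CA. A minimal self-signed sketch, assuming Go's standard crypto/x509 package rather than minikube's own crypto.go helpers, producing a certificate with the same SAN IPs the log reports for the apiserver cert:

	// Sketch only: self-signed certificate with the SAN IPs shown in the log
	// ([10.96.0.1 127.0.0.1 192.168.50.70]); minikube signs its profile certs
	// with the minikubeCA instead.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			IPAddresses:  []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.70")},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		}
		// Self-signed: template doubles as parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}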
	I0729 18:17:10.851597   70090 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:17:10.851634   70090 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:17:10.851644   70090 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:17:10.851668   70090 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:17:10.851693   70090 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:17:10.851714   70090 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:17:10.851751   70090 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:17:10.852450   70090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:17:10.895047   70090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:17:10.932036   70090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:17:10.973835   70090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:17:11.005246   70090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 18:17:11.038976   70090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:17:11.070482   70090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:17:11.100268   70090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:17:11.128012   70090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:17:11.153950   70090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:17:11.182089   70090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:17:11.219600   70090 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:17:11.246792   70090 ssh_runner.go:195] Run: openssl version
	I0729 18:17:11.254740   70090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:17:11.268678   70090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:17:11.273927   70090 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:17:11.273991   70090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:17:11.280945   70090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:17:11.294620   70090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:17:11.308170   70090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:17:11.313706   70090 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:17:11.313784   70090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:17:11.320576   70090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:17:11.341049   70090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:17:11.356220   70090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:17:11.361187   70090 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:17:11.361247   70090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:17:11.368331   70090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 18:17:11.381720   70090 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:17:11.386695   70090 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 18:17:11.386741   70090 kubeadm.go:392] StartCluster: {Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:17:11.386838   70090 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:17:11.386879   70090 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:17:11.435451   70090 cri.go:89] found id: ""
	I0729 18:17:11.435523   70090 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:17:11.448451   70090 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:17:11.460098   70090 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:17:11.473644   70090 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:17:11.473667   70090 kubeadm.go:157] found existing configuration files:
	
	I0729 18:17:11.473721   70090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:17:11.483674   70090 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:17:11.483724   70090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:17:11.494342   70090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:17:11.506067   70090 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:17:11.506123   70090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:17:11.522529   70090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:17:11.538090   70090 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:17:11.538142   70090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:17:11.552211   70090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:17:11.570558   70090 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:17:11.570602   70090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:17:11.592548   70090 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:17:11.812343   70090 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 18:17:11.812447   70090 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:17:11.969323   70090 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:17:11.969487   70090 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:17:11.969625   70090 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:17:12.190931   70090 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:17:12.259147   70090 out.go:204]   - Generating certificates and keys ...
	I0729 18:17:12.259291   70090 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:17:12.259400   70090 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:17:12.320109   70090 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 18:17:12.507671   70090 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 18:17:12.786755   70090 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 18:17:13.145031   70090 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 18:17:13.231958   70090 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 18:17:13.232199   70090 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-386663] and IPs [192.168.50.70 127.0.0.1 ::1]
	I0729 18:17:13.572017   70090 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 18:17:13.572230   70090 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-386663] and IPs [192.168.50.70 127.0.0.1 ::1]
	I0729 18:17:13.699756   70090 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 18:17:13.796681   70090 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 18:17:13.932306   70090 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 18:17:13.932548   70090 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:17:14.065275   70090 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:17:14.224429   70090 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:17:14.439392   70090 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:17:14.617250   70090 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:17:14.633875   70090 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:17:14.636048   70090 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:17:14.636103   70090 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:17:14.779897   70090 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:17:14.781626   70090 out.go:204]   - Booting up control plane ...
	I0729 18:17:14.781743   70090 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:17:14.787730   70090 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:17:14.789593   70090 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:17:14.790850   70090 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:17:14.794828   70090 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 18:17:54.733798   70090 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 18:17:54.734446   70090 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:17:54.734959   70090 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:17:59.734965   70090 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:17:59.735262   70090 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:18:09.734658   70090 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:18:09.734950   70090 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:18:29.734609   70090 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:18:29.734900   70090 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:19:09.736631   70090 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:19:09.736807   70090 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:19:09.736830   70090 kubeadm.go:310] 
	I0729 18:19:09.736912   70090 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 18:19:09.736974   70090 kubeadm.go:310] 		timed out waiting for the condition
	I0729 18:19:09.736981   70090 kubeadm.go:310] 
	I0729 18:19:09.737010   70090 kubeadm.go:310] 	This error is likely caused by:
	I0729 18:19:09.737066   70090 kubeadm.go:310] 		- The kubelet is not running
	I0729 18:19:09.737212   70090 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 18:19:09.737226   70090 kubeadm.go:310] 
	I0729 18:19:09.737357   70090 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 18:19:09.737428   70090 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 18:19:09.737462   70090 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 18:19:09.737469   70090 kubeadm.go:310] 
	I0729 18:19:09.737635   70090 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 18:19:09.737772   70090 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 18:19:09.737790   70090 kubeadm.go:310] 
	I0729 18:19:09.737954   70090 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 18:19:09.738085   70090 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 18:19:09.738192   70090 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 18:19:09.738292   70090 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 18:19:09.738301   70090 kubeadm.go:310] 
	I0729 18:19:09.738933   70090 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:19:09.739051   70090 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 18:19:09.739141   70090 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 18:19:09.739279   70090 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-386663] and IPs [192.168.50.70 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-386663] and IPs [192.168.50.70 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
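	The repeated [kubelet-check] failures above come from kubeadm probing the kubelet's local health endpoint and getting "connection refused" because the kubelet never came up. A minimal sketch of the same probe, assuming the default kubelet healthz port 10248 shown in the log:

	// Sketch only (not kubeadm itself): GET http://localhost:10248/healthz, the
	// endpoint kubeadm's curl check polls in the log above.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// A "connection refused" here matches the [kubelet-check] failures:
			// nothing is listening on 10248 because the kubelet is not running.
			fmt.Fprintln(os.Stderr, "kubelet healthz probe failed:", err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s: %s\n", resp.Status, body)
	}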
	
	I0729 18:19:09.739339   70090 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:19:11.850453   70090 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.111086063s)
	I0729 18:19:11.850532   70090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:19:11.865044   70090 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:19:11.874647   70090 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:19:11.874662   70090 kubeadm.go:157] found existing configuration files:
	
	I0729 18:19:11.874708   70090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:19:11.883910   70090 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:19:11.883953   70090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:19:11.893028   70090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:19:11.901787   70090 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:19:11.901826   70090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:19:11.910921   70090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:19:11.919820   70090 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:19:11.919873   70090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:19:11.928911   70090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:19:11.937731   70090 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:19:11.937771   70090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:19:11.947315   70090 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:19:12.185578   70090 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:21:08.109122   70090 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 18:21:08.109216   70090 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 18:21:08.111061   70090 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 18:21:08.111124   70090 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:21:08.111208   70090 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:21:08.111289   70090 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:21:08.111424   70090 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:21:08.111511   70090 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:21:08.113218   70090 out.go:204]   - Generating certificates and keys ...
	I0729 18:21:08.113284   70090 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:21:08.113342   70090 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:21:08.113424   70090 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:21:08.113509   70090 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:21:08.113589   70090 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:21:08.113665   70090 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:21:08.113754   70090 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:21:08.113822   70090 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:21:08.113914   70090 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:21:08.114001   70090 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:21:08.114051   70090 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:21:08.114131   70090 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:21:08.114208   70090 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:21:08.114257   70090 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:21:08.114325   70090 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:21:08.114425   70090 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:21:08.114577   70090 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:21:08.114704   70090 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:21:08.114756   70090 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:21:08.114811   70090 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:21:08.116257   70090 out.go:204]   - Booting up control plane ...
	I0729 18:21:08.116333   70090 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:21:08.116422   70090 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:21:08.116510   70090 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:21:08.116577   70090 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:21:08.116782   70090 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 18:21:08.116838   70090 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 18:21:08.116918   70090 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:21:08.117126   70090 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:21:08.117229   70090 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:21:08.117435   70090 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:21:08.117535   70090 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:21:08.117717   70090 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:21:08.117797   70090 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:21:08.117967   70090 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:21:08.118062   70090 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:21:08.118254   70090 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:21:08.118264   70090 kubeadm.go:310] 
	I0729 18:21:08.118297   70090 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 18:21:08.118335   70090 kubeadm.go:310] 		timed out waiting for the condition
	I0729 18:21:08.118341   70090 kubeadm.go:310] 
	I0729 18:21:08.118384   70090 kubeadm.go:310] 	This error is likely caused by:
	I0729 18:21:08.118412   70090 kubeadm.go:310] 		- The kubelet is not running
	I0729 18:21:08.118523   70090 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 18:21:08.118534   70090 kubeadm.go:310] 
	I0729 18:21:08.118677   70090 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 18:21:08.118728   70090 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 18:21:08.118770   70090 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 18:21:08.118778   70090 kubeadm.go:310] 
	I0729 18:21:08.118926   70090 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 18:21:08.119042   70090 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 18:21:08.119056   70090 kubeadm.go:310] 
	I0729 18:21:08.119212   70090 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 18:21:08.119336   70090 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 18:21:08.119441   70090 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 18:21:08.119539   70090 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 18:21:08.119565   70090 kubeadm.go:310] 
	I0729 18:21:08.119601   70090 kubeadm.go:394] duration metric: took 3m56.732861151s to StartCluster
	I0729 18:21:08.119647   70090 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:21:08.119695   70090 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:21:08.169606   70090 cri.go:89] found id: ""
	I0729 18:21:08.169634   70090 logs.go:276] 0 containers: []
	W0729 18:21:08.169643   70090 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:21:08.169648   70090 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:21:08.169711   70090 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:21:08.203778   70090 cri.go:89] found id: ""
	I0729 18:21:08.203805   70090 logs.go:276] 0 containers: []
	W0729 18:21:08.203812   70090 logs.go:278] No container was found matching "etcd"
	I0729 18:21:08.203818   70090 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:21:08.203890   70090 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:21:08.237266   70090 cri.go:89] found id: ""
	I0729 18:21:08.237289   70090 logs.go:276] 0 containers: []
	W0729 18:21:08.237297   70090 logs.go:278] No container was found matching "coredns"
	I0729 18:21:08.237303   70090 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:21:08.237348   70090 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:21:08.273136   70090 cri.go:89] found id: ""
	I0729 18:21:08.273167   70090 logs.go:276] 0 containers: []
	W0729 18:21:08.273177   70090 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:21:08.273184   70090 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:21:08.273245   70090 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:21:08.307381   70090 cri.go:89] found id: ""
	I0729 18:21:08.307404   70090 logs.go:276] 0 containers: []
	W0729 18:21:08.307410   70090 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:21:08.307416   70090 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:21:08.307473   70090 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:21:08.340477   70090 cri.go:89] found id: ""
	I0729 18:21:08.340503   70090 logs.go:276] 0 containers: []
	W0729 18:21:08.340510   70090 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:21:08.340522   70090 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:21:08.340567   70090 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:21:08.373142   70090 cri.go:89] found id: ""
	I0729 18:21:08.373169   70090 logs.go:276] 0 containers: []
	W0729 18:21:08.373180   70090 logs.go:278] No container was found matching "kindnet"
	I0729 18:21:08.373191   70090 logs.go:123] Gathering logs for kubelet ...
	I0729 18:21:08.373206   70090 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:21:08.423725   70090 logs.go:123] Gathering logs for dmesg ...
	I0729 18:21:08.423758   70090 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:21:08.440827   70090 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:21:08.440854   70090 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:21:08.590065   70090 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:21:08.590085   70090 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:21:08.590096   70090 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:21:08.699690   70090 logs.go:123] Gathering logs for container status ...
	I0729 18:21:08.699727   70090 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0729 18:21:08.738163   70090 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 18:21:08.738213   70090 out.go:239] * 
	* 
	W0729 18:21:08.738291   70090 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 18:21:08.738314   70090 out.go:239] * 
	* 
	W0729 18:21:08.739230   70090 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 18:21:08.742708   70090 out.go:177] 
	W0729 18:21:08.743878   70090 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 18:21:08.743936   70090 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 18:21:08.743960   70090 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 18:21:08.745357   70090 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-386663 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386663 -n old-k8s-version-386663
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386663 -n old-k8s-version-386663: exit status 6 (232.08703ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0729 18:21:09.012716   77011 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-386663" does not appear in /home/jenkins/minikube-integration/19345-11206/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-386663" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (283.24s)
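The kubeadm and minikube output above already names the follow-up checks; the sketch below only collects them into runnable form. This is a hedged example, not something the test harness ran: it assumes the VM for profile old-k8s-version-386663 is still up and reachable via `minikube ssh`, and it reuses the --extra-config flag suggested in the log.

    # Inspect the kubelet inside the VM, as the kubeadm output above suggests
    minikube ssh -p old-k8s-version-386663 -- sudo systemctl status kubelet
    minikube ssh -p old-k8s-version-386663 -- sudo journalctl -xeu kubelet | tail -n 100
    # List control-plane containers via CRI-O (command quoted from the output above)
    minikube ssh -p old-k8s-version-386663 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # Retry the start with the cgroup-driver override named in the minikube suggestion
    out/minikube-linux-amd64 start -p old-k8s-version-386663 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd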

x
+
TestStartStop/group/no-preload/serial/Stop (139.07s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-888056 --alsologtostderr -v=3
E0729 18:19:15.490128   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/auto-729010/client.crt: no such file or directory
E0729 18:19:15.528333   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kindnet-729010/client.crt: no such file or directory
E0729 18:19:15.533609   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kindnet-729010/client.crt: no such file or directory
E0729 18:19:15.543832   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kindnet-729010/client.crt: no such file or directory
E0729 18:19:15.564075   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kindnet-729010/client.crt: no such file or directory
E0729 18:19:15.604377   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kindnet-729010/client.crt: no such file or directory
E0729 18:19:15.684647   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kindnet-729010/client.crt: no such file or directory
E0729 18:19:15.845354   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kindnet-729010/client.crt: no such file or directory
E0729 18:19:16.165914   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kindnet-729010/client.crt: no such file or directory
E0729 18:19:16.806750   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kindnet-729010/client.crt: no such file or directory
E0729 18:19:18.087315   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kindnet-729010/client.crt: no such file or directory
E0729 18:19:20.648475   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kindnet-729010/client.crt: no such file or directory
E0729 18:19:25.769244   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kindnet-729010/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-888056 --alsologtostderr -v=3: exit status 82 (2m0.523885281s)

-- stdout --
	* Stopping node "no-preload-888056"  ...
	
	

-- /stdout --
** stderr ** 
	I0729 18:19:10.946533   76293 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:19:10.946813   76293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:19:10.946823   76293 out.go:304] Setting ErrFile to fd 2...
	I0729 18:19:10.946827   76293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:19:10.947018   76293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 18:19:10.947227   76293 out.go:298] Setting JSON to false
	I0729 18:19:10.947305   76293 mustload.go:65] Loading cluster: no-preload-888056
	I0729 18:19:10.947617   76293 config.go:182] Loaded profile config "no-preload-888056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:19:10.947682   76293 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/config.json ...
	I0729 18:19:10.947849   76293 mustload.go:65] Loading cluster: no-preload-888056
	I0729 18:19:10.947954   76293 config.go:182] Loaded profile config "no-preload-888056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:19:10.947976   76293 stop.go:39] StopHost: no-preload-888056
	I0729 18:19:10.948345   76293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:19:10.948391   76293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:19:10.963725   76293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42983
	I0729 18:19:10.964143   76293 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:19:10.964718   76293 main.go:141] libmachine: Using API Version  1
	I0729 18:19:10.964746   76293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:19:10.965082   76293 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:19:10.967275   76293 out.go:177] * Stopping node "no-preload-888056"  ...
	I0729 18:19:10.968346   76293 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 18:19:10.968368   76293 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:19:10.968582   76293 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 18:19:10.968612   76293 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:19:10.971469   76293 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:19:10.971858   76293 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:19:10.971893   76293 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:19:10.972027   76293 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:19:10.972192   76293 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:19:10.972328   76293 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:19:10.972479   76293 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:19:11.098793   76293 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 18:19:11.158228   76293 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 18:19:11.223410   76293 main.go:141] libmachine: Stopping "no-preload-888056"...
	I0729 18:19:11.223444   76293 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:19:11.225336   76293 main.go:141] libmachine: (no-preload-888056) Calling .Stop
	I0729 18:19:11.229146   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 0/120
	I0729 18:19:12.230454   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 1/120
	I0729 18:19:13.231755   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 2/120
	I0729 18:19:14.233057   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 3/120
	I0729 18:19:15.234420   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 4/120
	I0729 18:19:16.236717   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 5/120
	I0729 18:19:17.238125   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 6/120
	I0729 18:19:18.239523   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 7/120
	I0729 18:19:19.241153   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 8/120
	I0729 18:19:20.242435   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 9/120
	I0729 18:19:21.243662   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 10/120
	I0729 18:19:22.245031   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 11/120
	I0729 18:19:23.246618   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 12/120
	I0729 18:19:24.247965   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 13/120
	I0729 18:19:25.249375   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 14/120
	I0729 18:19:26.250920   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 15/120
	I0729 18:19:27.252886   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 16/120
	I0729 18:19:28.255022   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 17/120
	I0729 18:19:29.256740   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 18/120
	I0729 18:19:30.258174   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 19/120
	I0729 18:19:31.260430   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 20/120
	I0729 18:19:32.262350   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 21/120
	I0729 18:19:33.264430   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 22/120
	I0729 18:19:34.266303   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 23/120
	I0729 18:19:35.267760   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 24/120
	I0729 18:19:36.269623   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 25/120
	I0729 18:19:37.271169   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 26/120
	I0729 18:19:38.272679   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 27/120
	I0729 18:19:39.274020   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 28/120
	I0729 18:19:40.275799   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 29/120
	I0729 18:19:41.277938   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 30/120
	I0729 18:19:42.279570   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 31/120
	I0729 18:19:43.281421   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 32/120
	I0729 18:19:44.283036   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 33/120
	I0729 18:19:45.285113   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 34/120
	I0729 18:19:46.286699   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 35/120
	I0729 18:19:47.288755   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 36/120
	I0729 18:19:48.290024   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 37/120
	I0729 18:19:49.291443   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 38/120
	I0729 18:19:50.292951   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 39/120
	I0729 18:19:51.295106   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 40/120
	I0729 18:19:52.296843   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 41/120
	I0729 18:19:53.298076   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 42/120
	I0729 18:19:54.299369   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 43/120
	I0729 18:19:55.300652   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 44/120
	I0729 18:19:56.302802   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 45/120
	I0729 18:19:57.304035   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 46/120
	I0729 18:19:58.305366   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 47/120
	I0729 18:19:59.306768   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 48/120
	I0729 18:20:00.307999   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 49/120
	I0729 18:20:01.310238   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 50/120
	I0729 18:20:02.311690   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 51/120
	I0729 18:20:03.313101   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 52/120
	I0729 18:20:04.314328   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 53/120
	I0729 18:20:05.315704   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 54/120
	I0729 18:20:06.317858   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 55/120
	I0729 18:20:07.320279   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 56/120
	I0729 18:20:08.321637   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 57/120
	I0729 18:20:09.323183   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 58/120
	I0729 18:20:10.324538   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 59/120
	I0729 18:20:11.326787   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 60/120
	I0729 18:20:12.328089   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 61/120
	I0729 18:20:13.329328   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 62/120
	I0729 18:20:14.330768   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 63/120
	I0729 18:20:15.332089   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 64/120
	I0729 18:20:16.333946   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 65/120
	I0729 18:20:17.335383   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 66/120
	I0729 18:20:18.336697   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 67/120
	I0729 18:20:19.338200   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 68/120
	I0729 18:20:20.339488   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 69/120
	I0729 18:20:21.341770   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 70/120
	I0729 18:20:22.343070   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 71/120
	I0729 18:20:23.344316   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 72/120
	I0729 18:20:24.345680   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 73/120
	I0729 18:20:25.346982   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 74/120
	I0729 18:20:26.349001   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 75/120
	I0729 18:20:27.350580   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 76/120
	I0729 18:20:28.352029   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 77/120
	I0729 18:20:29.353436   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 78/120
	I0729 18:20:30.354978   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 79/120
	I0729 18:20:31.357367   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 80/120
	I0729 18:20:32.359246   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 81/120
	I0729 18:20:33.360808   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 82/120
	I0729 18:20:34.362265   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 83/120
	I0729 18:20:35.363730   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 84/120
	I0729 18:20:36.365953   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 85/120
	I0729 18:20:37.367258   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 86/120
	I0729 18:20:38.368810   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 87/120
	I0729 18:20:39.370183   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 88/120
	I0729 18:20:40.371573   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 89/120
	I0729 18:20:41.372938   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 90/120
	I0729 18:20:42.374598   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 91/120
	I0729 18:20:43.376038   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 92/120
	I0729 18:20:44.377401   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 93/120
	I0729 18:20:45.378980   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 94/120
	I0729 18:20:46.381041   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 95/120
	I0729 18:20:47.382646   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 96/120
	I0729 18:20:48.384121   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 97/120
	I0729 18:20:49.385428   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 98/120
	I0729 18:20:50.387029   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 99/120
	I0729 18:20:51.389432   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 100/120
	I0729 18:20:52.390935   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 101/120
	I0729 18:20:53.392488   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 102/120
	I0729 18:20:54.393819   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 103/120
	I0729 18:20:55.395487   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 104/120
	I0729 18:20:56.397741   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 105/120
	I0729 18:20:57.398839   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 106/120
	I0729 18:20:58.400441   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 107/120
	I0729 18:20:59.401893   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 108/120
	I0729 18:21:00.403532   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 109/120
	I0729 18:21:01.405068   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 110/120
	I0729 18:21:02.406535   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 111/120
	I0729 18:21:03.407974   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 112/120
	I0729 18:21:04.409421   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 113/120
	I0729 18:21:05.410865   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 114/120
	I0729 18:21:06.413085   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 115/120
	I0729 18:21:07.414527   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 116/120
	I0729 18:21:08.415928   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 117/120
	I0729 18:21:09.418208   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 118/120
	I0729 18:21:10.419710   76293 main.go:141] libmachine: (no-preload-888056) Waiting for machine to stop 119/120
	I0729 18:21:11.420591   76293 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 18:21:11.420666   76293 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 18:21:11.422479   76293 out.go:177] 
	W0729 18:21:11.423708   76293 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 18:21:11.423721   76293 out.go:239] * 
	* 
	W0729 18:21:11.427050   76293 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 18:21:11.428880   76293 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-888056 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-888056 -n no-preload-888056
E0729 18:21:12.234248   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/calico-729010/client.crt: no such file or directory
E0729 18:21:17.702338   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/custom-flannel-729010/client.crt: no such file or directory
E0729 18:21:18.371469   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/auto-729010/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-888056 -n no-preload-888056: exit status 3 (18.539684654s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:21:29.970701   77158 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.80:22: connect: no route to host
	E0729 18:21:29.970724   77158 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.80:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-888056" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.07s)
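
The stop failures above (and the embed-certs and default-k8s-diff-port failures below) all show the same shape: after backing up /etc/cni and /etc/kubernetes, libmachine polls the VM roughly once per second for 120 attempts ("Waiting for machine to stop 0/120" ... "119/120"), then gives up while the guest still reports "Running", which the CLI surfaces as GUEST_STOP_TIMEOUT / exit status 82. A minimal Go sketch of that poll-until-timeout pattern, for orientation only — this is not the minikube/libmachine source, and requestStop/stillRunning are hypothetical stand-ins for the driver calls seen in the log:

package main

import (
	"errors"
	"fmt"
	"time"
)

// Hypothetical stand-ins for the kvm2 driver calls seen in the log.
func requestStop() error { return nil }  // corresponds to Calling .Stop
func stillRunning() bool { return true } // state never leaves "Running" in the failing runs

// waitForStop mirrors the "Waiting for machine to stop i/120" loop:
// one check per second, up to `attempts` checks, then a timeout error.
func waitForStop(attempts int) error {
	if err := requestStop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		if !stillRunning() {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(1 * time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	if err := waitForStop(120); err != nil {
		// the caller reports this as GUEST_STOP_TIMEOUT and exits non-zero
		fmt.Println("stop err:", err)
	}
}
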

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-409322 --alsologtostderr -v=3
E0729 18:19:50.312268   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/calico-729010/client.crt: no such file or directory
E0729 18:19:50.317543   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/calico-729010/client.crt: no such file or directory
E0729 18:19:50.327761   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/calico-729010/client.crt: no such file or directory
E0729 18:19:50.348023   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/calico-729010/client.crt: no such file or directory
E0729 18:19:50.388312   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/calico-729010/client.crt: no such file or directory
E0729 18:19:50.468684   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/calico-729010/client.crt: no such file or directory
E0729 18:19:50.629258   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/calico-729010/client.crt: no such file or directory
E0729 18:19:50.949833   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/calico-729010/client.crt: no such file or directory
E0729 18:19:51.590647   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/calico-729010/client.crt: no such file or directory
E0729 18:19:52.871339   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/calico-729010/client.crt: no such file or directory
E0729 18:19:55.432083   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/calico-729010/client.crt: no such file or directory
E0729 18:19:56.451225   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/auto-729010/client.crt: no such file or directory
E0729 18:19:56.490439   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kindnet-729010/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-409322 --alsologtostderr -v=3: exit status 82 (2m0.493803705s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-409322"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 18:19:38.299908   76491 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:19:38.300028   76491 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:19:38.300038   76491 out.go:304] Setting ErrFile to fd 2...
	I0729 18:19:38.300045   76491 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:19:38.300216   76491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 18:19:38.300456   76491 out.go:298] Setting JSON to false
	I0729 18:19:38.300552   76491 mustload.go:65] Loading cluster: embed-certs-409322
	I0729 18:19:38.300882   76491 config.go:182] Loaded profile config "embed-certs-409322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:19:38.300965   76491 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/config.json ...
	I0729 18:19:38.301144   76491 mustload.go:65] Loading cluster: embed-certs-409322
	I0729 18:19:38.301266   76491 config.go:182] Loaded profile config "embed-certs-409322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:19:38.301304   76491 stop.go:39] StopHost: embed-certs-409322
	I0729 18:19:38.301671   76491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:19:38.301719   76491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:19:38.316310   76491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40713
	I0729 18:19:38.316815   76491 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:19:38.317394   76491 main.go:141] libmachine: Using API Version  1
	I0729 18:19:38.317422   76491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:19:38.317708   76491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:19:38.320046   76491 out.go:177] * Stopping node "embed-certs-409322"  ...
	I0729 18:19:38.321347   76491 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 18:19:38.321382   76491 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:19:38.321721   76491 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 18:19:38.321756   76491 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:19:38.324838   76491 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:19:38.325345   76491 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:18:05 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:19:38.325374   76491 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:19:38.325526   76491 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:19:38.325671   76491 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:19:38.325813   76491 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:19:38.325923   76491 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:19:38.423919   76491 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 18:19:38.493497   76491 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 18:19:38.553432   76491 main.go:141] libmachine: Stopping "embed-certs-409322"...
	I0729 18:19:38.553474   76491 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:19:38.555102   76491 main.go:141] libmachine: (embed-certs-409322) Calling .Stop
	I0729 18:19:38.558737   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 0/120
	I0729 18:19:39.560813   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 1/120
	I0729 18:19:40.562043   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 2/120
	I0729 18:19:41.563986   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 3/120
	I0729 18:19:42.565390   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 4/120
	I0729 18:19:43.567365   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 5/120
	I0729 18:19:44.568969   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 6/120
	I0729 18:19:45.570676   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 7/120
	I0729 18:19:46.572118   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 8/120
	I0729 18:19:47.573386   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 9/120
	I0729 18:19:48.575713   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 10/120
	I0729 18:19:49.577423   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 11/120
	I0729 18:19:50.579019   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 12/120
	I0729 18:19:51.580352   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 13/120
	I0729 18:19:52.581759   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 14/120
	I0729 18:19:53.583626   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 15/120
	I0729 18:19:54.584778   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 16/120
	I0729 18:19:55.586266   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 17/120
	I0729 18:19:56.587720   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 18/120
	I0729 18:19:57.589100   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 19/120
	I0729 18:19:58.591598   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 20/120
	I0729 18:19:59.593459   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 21/120
	I0729 18:20:00.594749   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 22/120
	I0729 18:20:01.596050   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 23/120
	I0729 18:20:02.597528   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 24/120
	I0729 18:20:03.599308   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 25/120
	I0729 18:20:04.601374   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 26/120
	I0729 18:20:05.603208   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 27/120
	I0729 18:20:06.605546   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 28/120
	I0729 18:20:07.606735   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 29/120
	I0729 18:20:08.608839   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 30/120
	I0729 18:20:09.610340   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 31/120
	I0729 18:20:10.611651   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 32/120
	I0729 18:20:11.613209   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 33/120
	I0729 18:20:12.614383   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 34/120
	I0729 18:20:13.616038   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 35/120
	I0729 18:20:14.617431   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 36/120
	I0729 18:20:15.618553   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 37/120
	I0729 18:20:16.619943   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 38/120
	I0729 18:20:17.621055   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 39/120
	I0729 18:20:18.623254   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 40/120
	I0729 18:20:19.624478   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 41/120
	I0729 18:20:20.625635   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 42/120
	I0729 18:20:21.626771   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 43/120
	I0729 18:20:22.628010   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 44/120
	I0729 18:20:23.630248   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 45/120
	I0729 18:20:24.631661   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 46/120
	I0729 18:20:25.633008   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 47/120
	I0729 18:20:26.634387   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 48/120
	I0729 18:20:27.635745   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 49/120
	I0729 18:20:28.638029   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 50/120
	I0729 18:20:29.639411   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 51/120
	I0729 18:20:30.640694   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 52/120
	I0729 18:20:31.641972   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 53/120
	I0729 18:20:32.643478   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 54/120
	I0729 18:20:33.645255   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 55/120
	I0729 18:20:34.646634   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 56/120
	I0729 18:20:35.647945   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 57/120
	I0729 18:20:36.649482   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 58/120
	I0729 18:20:37.650901   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 59/120
	I0729 18:20:38.652787   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 60/120
	I0729 18:20:39.654154   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 61/120
	I0729 18:20:40.655413   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 62/120
	I0729 18:20:41.656834   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 63/120
	I0729 18:20:42.658233   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 64/120
	I0729 18:20:43.660186   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 65/120
	I0729 18:20:44.661846   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 66/120
	I0729 18:20:45.663254   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 67/120
	I0729 18:20:46.664665   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 68/120
	I0729 18:20:47.666075   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 69/120
	I0729 18:20:48.668338   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 70/120
	I0729 18:20:49.669635   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 71/120
	I0729 18:20:50.671041   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 72/120
	I0729 18:20:51.672365   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 73/120
	I0729 18:20:52.673990   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 74/120
	I0729 18:20:53.676502   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 75/120
	I0729 18:20:54.677996   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 76/120
	I0729 18:20:55.679459   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 77/120
	I0729 18:20:56.680882   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 78/120
	I0729 18:20:57.682331   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 79/120
	I0729 18:20:58.684074   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 80/120
	I0729 18:20:59.685469   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 81/120
	I0729 18:21:00.686930   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 82/120
	I0729 18:21:01.688787   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 83/120
	I0729 18:21:02.690155   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 84/120
	I0729 18:21:03.692316   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 85/120
	I0729 18:21:04.693622   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 86/120
	I0729 18:21:05.695079   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 87/120
	I0729 18:21:06.696518   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 88/120
	I0729 18:21:07.697756   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 89/120
	I0729 18:21:08.700108   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 90/120
	I0729 18:21:09.701621   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 91/120
	I0729 18:21:10.703023   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 92/120
	I0729 18:21:11.704332   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 93/120
	I0729 18:21:12.706111   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 94/120
	I0729 18:21:13.708261   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 95/120
	I0729 18:21:14.709442   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 96/120
	I0729 18:21:15.710871   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 97/120
	I0729 18:21:16.712222   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 98/120
	I0729 18:21:17.713594   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 99/120
	I0729 18:21:18.715934   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 100/120
	I0729 18:21:19.717322   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 101/120
	I0729 18:21:20.718789   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 102/120
	I0729 18:21:21.720278   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 103/120
	I0729 18:21:22.721724   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 104/120
	I0729 18:21:23.723768   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 105/120
	I0729 18:21:24.725091   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 106/120
	I0729 18:21:25.726507   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 107/120
	I0729 18:21:26.727982   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 108/120
	I0729 18:21:27.729445   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 109/120
	I0729 18:21:28.731222   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 110/120
	I0729 18:21:29.732612   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 111/120
	I0729 18:21:30.734157   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 112/120
	I0729 18:21:31.735593   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 113/120
	I0729 18:21:32.737032   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 114/120
	I0729 18:21:33.739007   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 115/120
	I0729 18:21:34.740407   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 116/120
	I0729 18:21:35.741734   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 117/120
	I0729 18:21:36.743035   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 118/120
	I0729 18:21:37.744337   76491 main.go:141] libmachine: (embed-certs-409322) Waiting for machine to stop 119/120
	I0729 18:21:38.745675   76491 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 18:21:38.745737   76491 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 18:21:38.747657   76491 out.go:177] 
	W0729 18:21:38.748933   76491 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 18:21:38.748950   76491 out.go:239] * 
	* 
	W0729 18:21:38.752137   76491 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 18:21:38.753277   76491 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-409322 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-409322 -n embed-certs-409322
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-409322 -n embed-certs-409322: exit status 3 (18.606751601s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:21:57.362674   77317 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.58:22: connect: no route to host
	E0729 18:21:57.362693   77317 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.58:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-409322" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-502055 --alsologtostderr -v=3
E0729 18:20:10.793114   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/calico-729010/client.crt: no such file or directory
E0729 18:20:31.273630   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/calico-729010/client.crt: no such file or directory
E0729 18:20:37.451615   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kindnet-729010/client.crt: no such file or directory
E0729 18:20:57.219733   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/custom-flannel-729010/client.crt: no such file or directory
E0729 18:20:57.224991   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/custom-flannel-729010/client.crt: no such file or directory
E0729 18:20:57.235274   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/custom-flannel-729010/client.crt: no such file or directory
E0729 18:20:57.255613   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/custom-flannel-729010/client.crt: no such file or directory
E0729 18:20:57.296014   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/custom-flannel-729010/client.crt: no such file or directory
E0729 18:20:57.376405   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/custom-flannel-729010/client.crt: no such file or directory
E0729 18:20:57.537243   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/custom-flannel-729010/client.crt: no such file or directory
E0729 18:20:57.857803   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/custom-flannel-729010/client.crt: no such file or directory
E0729 18:20:58.498854   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/custom-flannel-729010/client.crt: no such file or directory
E0729 18:20:59.779851   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/custom-flannel-729010/client.crt: no such file or directory
E0729 18:21:02.340969   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/custom-flannel-729010/client.crt: no such file or directory
E0729 18:21:07.461921   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/custom-flannel-729010/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-502055 --alsologtostderr -v=3: exit status 82 (2m0.503623813s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-502055"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 18:20:08.250060   76749 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:20:08.250403   76749 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:20:08.250419   76749 out.go:304] Setting ErrFile to fd 2...
	I0729 18:20:08.250427   76749 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:20:08.250667   76749 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 18:20:08.250892   76749 out.go:298] Setting JSON to false
	I0729 18:20:08.250971   76749 mustload.go:65] Loading cluster: default-k8s-diff-port-502055
	I0729 18:20:08.251285   76749 config.go:182] Loaded profile config "default-k8s-diff-port-502055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:20:08.251356   76749 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/config.json ...
	I0729 18:20:08.251519   76749 mustload.go:65] Loading cluster: default-k8s-diff-port-502055
	I0729 18:20:08.251617   76749 config.go:182] Loaded profile config "default-k8s-diff-port-502055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:20:08.251643   76749 stop.go:39] StopHost: default-k8s-diff-port-502055
	I0729 18:20:08.252214   76749 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:20:08.252260   76749 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:20:08.267108   76749 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39781
	I0729 18:20:08.267663   76749 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:20:08.268220   76749 main.go:141] libmachine: Using API Version  1
	I0729 18:20:08.268244   76749 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:20:08.268575   76749 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:20:08.271063   76749 out.go:177] * Stopping node "default-k8s-diff-port-502055"  ...
	I0729 18:20:08.272671   76749 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 18:20:08.272722   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:20:08.273055   76749 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 18:20:08.273092   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:20:08.276293   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:20:08.276951   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:18:37 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:20:08.276976   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:20:08.277206   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:20:08.277399   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:20:08.277555   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:20:08.277715   76749 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:20:08.357943   76749 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 18:20:08.449006   76749 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 18:20:08.508514   76749 main.go:141] libmachine: Stopping "default-k8s-diff-port-502055"...
	I0729 18:20:08.508542   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:20:08.510149   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Stop
	I0729 18:20:08.513891   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 0/120
	I0729 18:20:09.515307   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 1/120
	I0729 18:20:10.516694   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 2/120
	I0729 18:20:11.518132   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 3/120
	I0729 18:20:12.519632   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 4/120
	I0729 18:20:13.521579   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 5/120
	I0729 18:20:14.523044   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 6/120
	I0729 18:20:15.524363   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 7/120
	I0729 18:20:16.525755   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 8/120
	I0729 18:20:17.526929   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 9/120
	I0729 18:20:18.528398   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 10/120
	I0729 18:20:19.529678   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 11/120
	I0729 18:20:20.531195   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 12/120
	I0729 18:20:21.532451   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 13/120
	I0729 18:20:22.534119   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 14/120
	I0729 18:20:23.536114   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 15/120
	I0729 18:20:24.537675   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 16/120
	I0729 18:20:25.539060   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 17/120
	I0729 18:20:26.540531   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 18/120
	I0729 18:20:27.541987   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 19/120
	I0729 18:20:28.543587   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 20/120
	I0729 18:20:29.544995   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 21/120
	I0729 18:20:30.546289   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 22/120
	I0729 18:20:31.547745   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 23/120
	I0729 18:20:32.549519   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 24/120
	I0729 18:20:33.551667   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 25/120
	I0729 18:20:34.553126   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 26/120
	I0729 18:20:35.554593   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 27/120
	I0729 18:20:36.556061   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 28/120
	I0729 18:20:37.557473   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 29/120
	I0729 18:20:38.559781   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 30/120
	I0729 18:20:39.561189   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 31/120
	I0729 18:20:40.562609   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 32/120
	I0729 18:20:41.564404   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 33/120
	I0729 18:20:42.565885   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 34/120
	I0729 18:20:43.568000   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 35/120
	I0729 18:20:44.569274   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 36/120
	I0729 18:20:45.571557   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 37/120
	I0729 18:20:46.572885   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 38/120
	I0729 18:20:47.574490   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 39/120
	I0729 18:20:48.576670   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 40/120
	I0729 18:20:49.577990   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 41/120
	I0729 18:20:50.579302   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 42/120
	I0729 18:20:51.580842   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 43/120
	I0729 18:20:52.582311   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 44/120
	I0729 18:20:53.584513   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 45/120
	I0729 18:20:54.586037   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 46/120
	I0729 18:20:55.587494   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 47/120
	I0729 18:20:56.588720   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 48/120
	I0729 18:20:57.590115   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 49/120
	I0729 18:20:58.592304   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 50/120
	I0729 18:20:59.593776   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 51/120
	I0729 18:21:00.595277   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 52/120
	I0729 18:21:01.596991   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 53/120
	I0729 18:21:02.598395   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 54/120
	I0729 18:21:03.600584   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 55/120
	I0729 18:21:04.603020   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 56/120
	I0729 18:21:05.604461   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 57/120
	I0729 18:21:06.606250   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 58/120
	I0729 18:21:07.607585   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 59/120
	I0729 18:21:08.609717   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 60/120
	I0729 18:21:09.611364   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 61/120
	I0729 18:21:10.612746   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 62/120
	I0729 18:21:11.614058   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 63/120
	I0729 18:21:12.615394   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 64/120
	I0729 18:21:13.617351   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 65/120
	I0729 18:21:14.618741   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 66/120
	I0729 18:21:15.620149   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 67/120
	I0729 18:21:16.621610   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 68/120
	I0729 18:21:17.622970   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 69/120
	I0729 18:21:18.624816   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 70/120
	I0729 18:21:19.626425   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 71/120
	I0729 18:21:20.628308   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 72/120
	I0729 18:21:21.629861   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 73/120
	I0729 18:21:22.631523   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 74/120
	I0729 18:21:23.633539   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 75/120
	I0729 18:21:24.635061   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 76/120
	I0729 18:21:25.636437   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 77/120
	I0729 18:21:26.637991   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 78/120
	I0729 18:21:27.639254   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 79/120
	I0729 18:21:28.641436   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 80/120
	I0729 18:21:29.642962   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 81/120
	I0729 18:21:30.645201   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 82/120
	I0729 18:21:31.646681   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 83/120
	I0729 18:21:32.648126   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 84/120
	I0729 18:21:33.650219   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 85/120
	I0729 18:21:34.651642   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 86/120
	I0729 18:21:35.652977   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 87/120
	I0729 18:21:36.654402   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 88/120
	I0729 18:21:37.655665   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 89/120
	I0729 18:21:38.657768   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 90/120
	I0729 18:21:39.659249   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 91/120
	I0729 18:21:40.660628   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 92/120
	I0729 18:21:41.661855   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 93/120
	I0729 18:21:42.663390   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 94/120
	I0729 18:21:43.665620   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 95/120
	I0729 18:21:44.667027   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 96/120
	I0729 18:21:45.668520   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 97/120
	I0729 18:21:46.669929   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 98/120
	I0729 18:21:47.671419   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 99/120
	I0729 18:21:48.672819   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 100/120
	I0729 18:21:49.674226   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 101/120
	I0729 18:21:50.675742   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 102/120
	I0729 18:21:51.677149   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 103/120
	I0729 18:21:52.678698   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 104/120
	I0729 18:21:53.680770   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 105/120
	I0729 18:21:54.682160   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 106/120
	I0729 18:21:55.683593   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 107/120
	I0729 18:21:56.685169   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 108/120
	I0729 18:21:57.686466   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 109/120
	I0729 18:21:58.688774   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 110/120
	I0729 18:21:59.690067   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 111/120
	I0729 18:22:00.691607   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 112/120
	I0729 18:22:01.693048   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 113/120
	I0729 18:22:02.694558   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 114/120
	I0729 18:22:03.696606   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 115/120
	I0729 18:22:04.698172   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 116/120
	I0729 18:22:05.699800   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 117/120
	I0729 18:22:06.700932   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 118/120
	I0729 18:22:07.702325   76749 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for machine to stop 119/120
	I0729 18:22:08.703135   76749 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 18:22:08.703203   76749 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 18:22:08.705169   76749 out.go:177] 
	W0729 18:22:08.706349   76749 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 18:22:08.706383   76749 out.go:239] * 
	* 
	W0729 18:22:08.709696   76749 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 18:22:08.710962   76749 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-502055 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-502055 -n default-k8s-diff-port-502055
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-502055 -n default-k8s-diff-port-502055: exit status 3 (18.602306305s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:22:27.314767   77597 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.244:22: connect: no route to host
	E0729 18:22:27.314789   77597 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.244:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-502055" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.11s)
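
For readers unfamiliar with the pattern the log above records, here is a minimal Go sketch (illustrative only, not code from the minikube test suite) of the same stop-then-probe sequence: run `minikube stop` for the profile, then query the host state with `status --format={{.Host}}`. The binary path and profile name are taken from the log; the helper program itself is an assumption.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "default-k8s-diff-port-502055" // profile name from the log above

		// Stop the VM; in the failing run this exited with status 82 (GUEST_STOP_TIMEOUT).
		stop := exec.Command("out/minikube-linux-amd64", "stop", "-p", profile, "--alsologtostderr", "-v=3")
		if out, err := stop.CombinedOutput(); err != nil {
			fmt.Printf("stop failed: %v\n%s", err, out)
		}

		// Probe the host state; a clean stop should report "Stopped",
		// while the run above reported "Error" because SSH to the VM failed.
		status := exec.Command("out/minikube-linux-amd64", "status", "--format={{.Host}}", "-p", profile, "-n", profile)
		out, _ := status.CombinedOutput()
		fmt.Println("host state:", strings.TrimSpace(string(out)))
	}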

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-386663 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-386663 create -f testdata/busybox.yaml: exit status 1 (43.240447ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-386663" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-386663 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386663 -n old-k8s-version-386663
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386663 -n old-k8s-version-386663: exit status 6 (243.166888ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:21:09.300783   77051 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-386663" does not appear in /home/jenkins/minikube-integration/19345-11206/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-386663" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386663 -n old-k8s-version-386663
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386663 -n old-k8s-version-386663: exit status 6 (226.238407ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:21:09.528340   77080 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-386663" does not appear in /home/jenkins/minikube-integration/19345-11206/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-386663" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.51s)
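
The failure above comes down to kubectl pointing at a context that no longer exists, and the captured output itself names the fix (`minikube update-context`). Below is a small illustrative Go sketch of that check-and-repair step; the context/profile name and binary path come from the log, the helper program is an assumption.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		name := "old-k8s-version-386663" // context and profile name from the log above

		// kubectl exits non-zero when the named context is absent from the kubeconfig.
		if err := exec.Command("kubectl", "config", "get-contexts", name).Run(); err != nil {
			fmt.Println("context missing:", err)

			// The warning in the captured output recommends repairing the kubeconfig entry.
			out, err := exec.Command("out/minikube-linux-amd64", "update-context", "-p", name).CombinedOutput()
			if err != nil {
				fmt.Printf("update-context failed: %v\n%s", err, out)
			}
		}
	}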

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (95.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-386663 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-386663 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m34.91457604s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-386663 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-386663 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-386663 describe deploy/metrics-server -n kube-system: exit status 1 (43.159322ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-386663" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-386663 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386663 -n old-k8s-version-386663
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386663 -n old-k8s-version-386663: exit status 6 (219.355152ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:22:44.706348   77948 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-386663" does not appear in /home/jenkins/minikube-integration/19345-11206/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-386663" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (95.18s)
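
The assertion that fails above enables metrics-server with a custom image and registry, then checks that the rendered Deployment actually references the overridden image. A hedged Go sketch of that verification step follows (context name and image string taken from the log; the program is illustrative, not the test's implementation).

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		ctx := "old-k8s-version-386663"                        // context name from the log above
		wantImage := "fake.domain/registry.k8s.io/echoserver:1.4" // override passed to the addon

		// Describe the addon Deployment through the profile's kubectl context.
		out, err := exec.Command("kubectl", "--context", ctx,
			"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
		if err != nil {
			fmt.Printf("describe failed: %v\n%s", err, out)
			return
		}

		if strings.Contains(string(out), wantImage) {
			fmt.Println("metrics-server deployment uses the overridden image")
		} else {
			fmt.Println("overridden image not found in deployment description")
		}
	}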

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-888056 -n no-preload-888056
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-888056 -n no-preload-888056: exit status 3 (3.168156069s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:21:33.138745   77236 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.80:22: connect: no route to host
	E0729 18:21:33.138768   77236 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.80:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-888056 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0729 18:21:38.183051   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/custom-flannel-729010/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-888056 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152709607s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.80:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-888056 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-888056 -n no-preload-888056
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-888056 -n no-preload-888056: exit status 3 (3.063036098s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:21:42.354785   77347 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.80:22: connect: no route to host
	E0729 18:21:42.354805   77347 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.80:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-888056" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-409322 -n embed-certs-409322
E0729 18:21:59.219816   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/enable-default-cni-729010/client.crt: no such file or directory
E0729 18:21:59.372177   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kindnet-729010/client.crt: no such file or directory
E0729 18:21:59.408415   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/flannel-729010/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-409322 -n embed-certs-409322: exit status 3 (3.168245472s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:22:00.530708   77485 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.58:22: connect: no route to host
	E0729 18:22:00.530730   77485 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.58:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-409322 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0729 18:22:04.340999   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/enable-default-cni-729010/client.crt: no such file or directory
E0729 18:22:04.529615   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/flannel-729010/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-409322 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152590656s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.58:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-409322 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-409322 -n embed-certs-409322
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-409322 -n embed-certs-409322: exit status 3 (3.062995191s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:22:09.746822   77566 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.58:22: connect: no route to host
	E0729 18:22:09.746845   77566 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.58:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-409322" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-502055 -n default-k8s-diff-port-502055
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-502055 -n default-k8s-diff-port-502055: exit status 3 (3.167507569s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:22:30.482709   77747 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.244:22: connect: no route to host
	E0729 18:22:30.482729   77747 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.244:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-502055 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0729 18:22:34.154642   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/calico-729010/client.crt: no such file or directory
E0729 18:22:35.062254   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/enable-default-cni-729010/client.crt: no such file or directory
E0729 18:22:35.250626   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/flannel-729010/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-502055 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154442177s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.244:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-502055 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-502055 -n default-k8s-diff-port-502055
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-502055 -n default-k8s-diff-port-502055: exit status 3 (3.061551433s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:22:39.698753   77828 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.244:22: connect: no route to host
	E0729 18:22:39.698773   77828 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.244:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-502055" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
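
All three EnableAddonAfterStop failures above share the same underlying symptom: the SSH dial to the VM's port 22 returns "no route to host", so every status and addon check fails before it can do anything useful. A minimal Go sketch that probes just that network layer (the address is the one printed in the log; everything else is illustrative):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// VM address from the log above; change it to probe a different profile's VM.
		addr := "192.168.61.244:22"

		// A plain TCP dial reproduces the "connect: no route to host" layer of the
		// failing status checks without going through minikube at all.
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			fmt.Println("ssh port unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("ssh port reachable")
	}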

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (762.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-386663 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0729 18:22:50.517360   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/bridge-729010/client.crt: no such file or directory
E0729 18:22:50.522591   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/bridge-729010/client.crt: no such file or directory
E0729 18:22:50.532823   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/bridge-729010/client.crt: no such file or directory
E0729 18:22:50.553136   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/bridge-729010/client.crt: no such file or directory
E0729 18:22:50.593435   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/bridge-729010/client.crt: no such file or directory
E0729 18:22:50.673794   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/bridge-729010/client.crt: no such file or directory
E0729 18:22:50.834289   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/bridge-729010/client.crt: no such file or directory
E0729 18:22:51.154910   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/bridge-729010/client.crt: no such file or directory
E0729 18:22:51.795976   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/bridge-729010/client.crt: no such file or directory
E0729 18:22:53.076670   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/bridge-729010/client.crt: no such file or directory
E0729 18:22:55.637045   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/bridge-729010/client.crt: no such file or directory
E0729 18:23:00.757200   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/bridge-729010/client.crt: no such file or directory
E0729 18:23:10.998046   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/bridge-729010/client.crt: no such file or directory
E0729 18:23:15.950994   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
E0729 18:23:16.023226   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/enable-default-cni-729010/client.crt: no such file or directory
E0729 18:23:16.211637   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/flannel-729010/client.crt: no such file or directory
E0729 18:23:29.676945   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
E0729 18:23:31.478472   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/bridge-729010/client.crt: no such file or directory
E0729 18:23:34.527446   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/auto-729010/client.crt: no such file or directory
E0729 18:23:41.064665   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/custom-flannel-729010/client.crt: no such file or directory
E0729 18:24:02.212367   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/auto-729010/client.crt: no such file or directory
E0729 18:24:12.438954   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/bridge-729010/client.crt: no such file or directory
E0729 18:24:15.528542   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kindnet-729010/client.crt: no such file or directory
E0729 18:24:37.943729   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/enable-default-cni-729010/client.crt: no such file or directory
E0729 18:24:38.132140   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/flannel-729010/client.crt: no such file or directory
E0729 18:24:43.213109   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kindnet-729010/client.crt: no such file or directory
E0729 18:24:50.312320   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/calico-729010/client.crt: no such file or directory
E0729 18:25:17.995143   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/calico-729010/client.crt: no such file or directory
E0729 18:25:34.360092   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/bridge-729010/client.crt: no such file or directory
E0729 18:25:57.219649   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/custom-flannel-729010/client.crt: no such file or directory
E0729 18:26:24.905615   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/custom-flannel-729010/client.crt: no such file or directory
E0729 18:26:52.902222   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
E0729 18:26:54.098935   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/enable-default-cni-729010/client.crt: no such file or directory
E0729 18:26:54.288265   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/flannel-729010/client.crt: no such file or directory
E0729 18:27:21.784769   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/enable-default-cni-729010/client.crt: no such file or directory
E0729 18:27:21.972650   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/flannel-729010/client.crt: no such file or directory
E0729 18:27:50.517640   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/bridge-729010/client.crt: no such file or directory
E0729 18:28:18.201290   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/bridge-729010/client.crt: no such file or directory
E0729 18:28:29.677006   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
E0729 18:28:34.527241   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/auto-729010/client.crt: no such file or directory
E0729 18:29:15.528947   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kindnet-729010/client.crt: no such file or directory
E0729 18:29:50.312044   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/calico-729010/client.crt: no such file or directory
E0729 18:30:57.219411   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/custom-flannel-729010/client.crt: no such file or directory
E0729 18:31:32.724742   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-386663 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m38.677391456s)

                                                
                                                
-- stdout --
	* [old-k8s-version-386663] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19345
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-386663" primary control-plane node in "old-k8s-version-386663" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-386663" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 18:22:47.218965   78080 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:22:47.219209   78080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:22:47.219217   78080 out.go:304] Setting ErrFile to fd 2...
	I0729 18:22:47.219222   78080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:22:47.219370   78080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 18:22:47.219863   78080 out.go:298] Setting JSON to false
	I0729 18:22:47.220726   78080 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7519,"bootTime":1722269848,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:22:47.220777   78080 start.go:139] virtualization: kvm guest
	I0729 18:22:47.222804   78080 out.go:177] * [old-k8s-version-386663] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:22:47.224119   78080 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 18:22:47.224173   78080 notify.go:220] Checking for updates...
	I0729 18:22:47.226449   78080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:22:47.227676   78080 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:22:47.228809   78080 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 18:22:47.229914   78080 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:22:47.230906   78080 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:22:47.232363   78080 config.go:182] Loaded profile config "old-k8s-version-386663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 18:22:47.232750   78080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:22:47.232814   78080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:22:47.247542   78080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44723
	I0729 18:22:47.247909   78080 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:22:47.248418   78080 main.go:141] libmachine: Using API Version  1
	I0729 18:22:47.248436   78080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:22:47.248786   78080 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:22:47.248965   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:22:47.250635   78080 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 18:22:47.251760   78080 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:22:47.252055   78080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:22:47.252098   78080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:22:47.266291   78080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35843
	I0729 18:22:47.266672   78080 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:22:47.267136   78080 main.go:141] libmachine: Using API Version  1
	I0729 18:22:47.267157   78080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:22:47.267492   78080 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:22:47.267662   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:22:47.303335   78080 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 18:22:47.304503   78080 start.go:297] selected driver: kvm2
	I0729 18:22:47.304513   78080 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:22:47.304607   78080 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:22:47.305291   78080 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:22:47.305360   78080 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19345-11206/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:22:47.319918   78080 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:22:47.320315   78080 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:22:47.320341   78080 cni.go:84] Creating CNI manager for ""
	I0729 18:22:47.320349   78080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:22:47.320386   78080 start.go:340] cluster config:
	{Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:22:47.320480   78080 iso.go:125] acquiring lock: {Name:mke302f851ce8256f9b44dd080ed38df68285cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:22:47.322357   78080 out.go:177] * Starting "old-k8s-version-386663" primary control-plane node in "old-k8s-version-386663" cluster
	I0729 18:22:47.323622   78080 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:22:47.323653   78080 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 18:22:47.323660   78080 cache.go:56] Caching tarball of preloaded images
	I0729 18:22:47.323740   78080 preload.go:172] Found /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:22:47.323761   78080 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 18:22:47.323849   78080 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/config.json ...
	I0729 18:22:47.324021   78080 start.go:360] acquireMachinesLock for old-k8s-version-386663: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:26:59.799712   78080 start.go:364] duration metric: took 4m12.475660562s to acquireMachinesLock for "old-k8s-version-386663"
	I0729 18:26:59.799786   78080 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:26:59.799796   78080 fix.go:54] fixHost starting: 
	I0729 18:26:59.800184   78080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:26:59.800215   78080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:26:59.816885   78080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37963
	I0729 18:26:59.817336   78080 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:26:59.817822   78080 main.go:141] libmachine: Using API Version  1
	I0729 18:26:59.817851   78080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:26:59.818283   78080 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:26:59.818505   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:26:59.818671   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetState
	I0729 18:26:59.820232   78080 fix.go:112] recreateIfNeeded on old-k8s-version-386663: state=Stopped err=<nil>
	I0729 18:26:59.820254   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	W0729 18:26:59.820426   78080 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:26:59.822140   78080 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-386663" ...
	I0729 18:26:59.823421   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .Start
	I0729 18:26:59.823575   78080 main.go:141] libmachine: (old-k8s-version-386663) Ensuring networks are active...
	I0729 18:26:59.824264   78080 main.go:141] libmachine: (old-k8s-version-386663) Ensuring network default is active
	I0729 18:26:59.824641   78080 main.go:141] libmachine: (old-k8s-version-386663) Ensuring network mk-old-k8s-version-386663 is active
	I0729 18:26:59.825024   78080 main.go:141] libmachine: (old-k8s-version-386663) Getting domain xml...
	I0729 18:26:59.825885   78080 main.go:141] libmachine: (old-k8s-version-386663) Creating domain...
	I0729 18:27:01.104265   78080 main.go:141] libmachine: (old-k8s-version-386663) Waiting to get IP...
	I0729 18:27:01.105349   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.105790   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.105836   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.105761   79098 retry.go:31] will retry after 308.255094ms: waiting for machine to come up
	I0729 18:27:01.415431   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.415999   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.416030   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.415952   79098 retry.go:31] will retry after 236.525723ms: waiting for machine to come up
	I0729 18:27:01.654767   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.655279   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.655312   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.655247   79098 retry.go:31] will retry after 311.010394ms: waiting for machine to come up
	I0729 18:27:01.967850   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.968374   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.968404   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.968333   79098 retry.go:31] will retry after 468.477549ms: waiting for machine to come up
	I0729 18:27:02.438066   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:02.438657   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:02.438686   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:02.438618   79098 retry.go:31] will retry after 601.056921ms: waiting for machine to come up
	I0729 18:27:03.041582   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:03.042097   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:03.042127   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:03.042040   79098 retry.go:31] will retry after 712.049848ms: waiting for machine to come up
	I0729 18:27:03.755536   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:03.756010   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:03.756040   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:03.755988   79098 retry.go:31] will retry after 1.092318096s: waiting for machine to come up
	I0729 18:27:04.849745   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:04.850202   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:04.850226   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:04.850147   79098 retry.go:31] will retry after 903.54457ms: waiting for machine to come up
	I0729 18:27:05.754781   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:05.755193   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:05.755218   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:05.755157   79098 retry.go:31] will retry after 1.693512671s: waiting for machine to come up
	I0729 18:27:07.451087   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:07.451659   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:07.451688   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:07.451607   79098 retry.go:31] will retry after 1.734643072s: waiting for machine to come up
	I0729 18:27:09.188407   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:09.188963   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:09.188997   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:09.188900   79098 retry.go:31] will retry after 2.010973572s: waiting for machine to come up
	I0729 18:27:11.201171   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:11.201586   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:11.201620   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:11.201535   79098 retry.go:31] will retry after 3.178533437s: waiting for machine to come up
	I0729 18:27:14.381238   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:14.381648   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:14.381677   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:14.381609   79098 retry.go:31] will retry after 4.005160817s: waiting for machine to come up
	I0729 18:27:18.388470   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.388971   78080 main.go:141] libmachine: (old-k8s-version-386663) Found IP for machine: 192.168.50.70
	I0729 18:27:18.388989   78080 main.go:141] libmachine: (old-k8s-version-386663) Reserving static IP address...
	I0729 18:27:18.388999   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has current primary IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.389431   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "old-k8s-version-386663", mac: "52:54:00:78:b6:ac", ip: "192.168.50.70"} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.389459   78080 main.go:141] libmachine: (old-k8s-version-386663) Reserved static IP address: 192.168.50.70
	I0729 18:27:18.389477   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | skip adding static IP to network mk-old-k8s-version-386663 - found existing host DHCP lease matching {name: "old-k8s-version-386663", mac: "52:54:00:78:b6:ac", ip: "192.168.50.70"}
	I0729 18:27:18.389493   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | Getting to WaitForSSH function...
	I0729 18:27:18.389515   78080 main.go:141] libmachine: (old-k8s-version-386663) Waiting for SSH to be available...
	I0729 18:27:18.391523   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.391916   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.391941   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.392062   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | Using SSH client type: external
	I0729 18:27:18.392088   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa (-rw-------)
	I0729 18:27:18.392119   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:27:18.392134   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | About to run SSH command:
	I0729 18:27:18.392150   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | exit 0
	I0729 18:27:18.514735   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | SSH cmd err, output: <nil>: 
	I0729 18:27:18.515114   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetConfigRaw
	I0729 18:27:18.515736   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:18.518194   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.518615   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.518651   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.518879   78080 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/config.json ...
	I0729 18:27:18.519090   78080 machine.go:94] provisionDockerMachine start ...
	I0729 18:27:18.519113   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:18.519322   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.521434   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.521824   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.521846   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.521996   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:18.522181   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.522349   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.522514   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:18.522724   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:18.522960   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:18.522975   78080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:27:18.622960   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:27:18.622989   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetMachineName
	I0729 18:27:18.623249   78080 buildroot.go:166] provisioning hostname "old-k8s-version-386663"
	I0729 18:27:18.623277   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetMachineName
	I0729 18:27:18.623461   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.626009   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.626376   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.626406   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.626649   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:18.626876   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.627141   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.627301   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:18.627474   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:18.627669   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:18.627683   78080 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-386663 && echo "old-k8s-version-386663" | sudo tee /etc/hostname
	I0729 18:27:18.748137   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-386663
	
	I0729 18:27:18.748165   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.751546   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.751882   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.751916   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.752086   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:18.752270   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.752409   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.752550   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:18.752747   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:18.753004   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:18.753031   78080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-386663' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-386663/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-386663' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:27:18.863358   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:27:18.863389   78080 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:27:18.863415   78080 buildroot.go:174] setting up certificates
	I0729 18:27:18.863425   78080 provision.go:84] configureAuth start
	I0729 18:27:18.863436   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetMachineName
	I0729 18:27:18.863754   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:18.866285   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.866641   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.866668   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.866797   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.868886   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.869241   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.869270   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.869404   78080 provision.go:143] copyHostCerts
	I0729 18:27:18.869459   78080 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:27:18.869468   78080 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:27:18.869522   78080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:27:18.869614   78080 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:27:18.869624   78080 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:27:18.869652   78080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:27:18.869740   78080 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:27:18.869750   78080 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:27:18.869772   78080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:27:18.869833   78080 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-386663 san=[127.0.0.1 192.168.50.70 localhost minikube old-k8s-version-386663]
	I0729 18:27:19.142743   78080 provision.go:177] copyRemoteCerts
	I0729 18:27:19.142808   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:27:19.142842   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.145484   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.145843   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.145872   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.146092   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.146334   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.146532   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.146692   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.230725   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:27:19.255862   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 18:27:19.290922   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 18:27:19.317519   78080 provision.go:87] duration metric: took 454.081583ms to configureAuth
	I0729 18:27:19.317549   78080 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:27:19.317766   78080 config.go:182] Loaded profile config "old-k8s-version-386663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 18:27:19.317854   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.320636   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.321074   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.321110   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.321346   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.321603   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.321782   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.321959   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.322158   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:19.322336   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:19.322351   78080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:27:19.626713   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:27:19.626737   78080 machine.go:97] duration metric: took 1.107631867s to provisionDockerMachine
	I0729 18:27:19.626749   78080 start.go:293] postStartSetup for "old-k8s-version-386663" (driver="kvm2")
	I0729 18:27:19.626763   78080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:27:19.626834   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.627168   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:27:19.627197   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.629389   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.629751   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.629782   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.629907   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.630102   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.630302   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.630460   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.709702   78080 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:27:19.713879   78080 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:27:19.713913   78080 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:27:19.713994   78080 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:27:19.714093   78080 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:27:19.714215   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:27:19.725226   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:19.751727   78080 start.go:296] duration metric: took 124.964072ms for postStartSetup
	I0729 18:27:19.751767   78080 fix.go:56] duration metric: took 19.951972224s for fixHost
	I0729 18:27:19.751796   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.754481   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.754843   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.754877   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.755107   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.755321   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.755482   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.755663   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.755829   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:19.756012   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:19.756024   78080 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 18:27:19.859279   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277639.831700968
	
	I0729 18:27:19.859302   78080 fix.go:216] guest clock: 1722277639.831700968
	I0729 18:27:19.859309   78080 fix.go:229] Guest: 2024-07-29 18:27:19.831700968 +0000 UTC Remote: 2024-07-29 18:27:19.751770935 +0000 UTC m=+272.565043390 (delta=79.930033ms)
	I0729 18:27:19.859327   78080 fix.go:200] guest clock delta is within tolerance: 79.930033ms
	I0729 18:27:19.859332   78080 start.go:83] releasing machines lock for "old-k8s-version-386663", held for 20.059569122s
	I0729 18:27:19.859353   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.859661   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:19.862741   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.863225   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.863261   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.863449   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.864092   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.864309   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.864392   78080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:27:19.864432   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.864547   78080 ssh_runner.go:195] Run: cat /version.json
	I0729 18:27:19.864572   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.867636   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.867798   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.868019   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.868044   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.868178   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.868330   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.868356   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.868360   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.868500   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.868587   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.868667   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.868754   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.868910   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.869046   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.947441   78080 ssh_runner.go:195] Run: systemctl --version
	I0729 18:27:19.967868   78080 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:27:20.114336   78080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:27:20.121716   78080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:27:20.121793   78080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:27:20.143272   78080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:27:20.143298   78080 start.go:495] detecting cgroup driver to use...
	I0729 18:27:20.143385   78080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:27:20.162433   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:27:20.178310   78080 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:27:20.178397   78080 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:27:20.194091   78080 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:27:20.209796   78080 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:27:20.341466   78080 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:27:20.514215   78080 docker.go:233] disabling docker service ...
	I0729 18:27:20.514338   78080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:27:20.531018   78080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:27:20.551839   78080 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:27:20.680430   78080 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:27:20.834782   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:27:20.852454   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:27:20.874962   78080 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 18:27:20.875017   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:20.886550   78080 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:27:20.886619   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:20.899344   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:20.914254   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:20.927308   78080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:27:20.939807   78080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:27:20.951648   78080 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:27:20.951738   78080 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:27:20.967918   78080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:27:20.979872   78080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:21.125398   78080 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:27:21.290736   78080 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:27:21.290816   78080 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:27:21.296922   78080 start.go:563] Will wait 60s for crictl version
	I0729 18:27:21.296987   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:21.302200   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:27:21.350783   78080 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:27:21.350919   78080 ssh_runner.go:195] Run: crio --version
	I0729 18:27:21.391539   78080 ssh_runner.go:195] Run: crio --version
	I0729 18:27:21.441225   78080 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 18:27:21.442583   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:21.446238   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:21.446728   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:21.446756   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:21.446988   78080 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 18:27:21.452537   78080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:21.470394   78080 kubeadm.go:883] updating cluster {Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:27:21.470555   78080 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:27:21.470610   78080 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:21.531670   78080 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 18:27:21.531742   78080 ssh_runner.go:195] Run: which lz4
	I0729 18:27:21.536436   78080 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 18:27:21.542100   78080 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:27:21.542139   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 18:27:23.329902   78080 crio.go:462] duration metric: took 1.793505279s to copy over tarball
	I0729 18:27:23.329979   78080 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:27:26.453768   78080 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.123735537s)
	I0729 18:27:26.453800   78080 crio.go:469] duration metric: took 3.123869338s to extract the tarball
	I0729 18:27:26.453809   78080 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 18:27:26.501748   78080 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:26.538093   78080 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 18:27:26.538124   78080 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 18:27:26.538226   78080 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.538297   78080 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 18:27:26.538387   78080 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.538232   78080 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:26.538441   78080 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.538303   78080 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.538277   78080 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.538783   78080 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.540806   78080 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.540823   78080 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.540847   78080 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.540858   78080 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 18:27:26.540806   78080 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:26.540894   78080 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.540937   78080 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.540987   78080 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.700993   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.704402   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.712647   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.714034   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.715935   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.753888   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.758588   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 18:27:26.837981   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:26.844473   78080 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 18:27:26.844532   78080 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.844578   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.877082   78080 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 18:27:26.877134   78080 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.877183   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.889792   78080 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 18:27:26.889887   78080 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.889842   78080 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 18:27:26.889944   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.889983   78080 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.890034   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.916338   78080 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 18:27:26.916388   78080 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.916440   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.916437   78080 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 18:27:26.916540   78080 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.916581   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.942747   78080 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 18:27:26.942794   78080 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 18:27:26.942839   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:27.056976   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:27.056976   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 18:27:27.057045   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:27.057071   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:27.057101   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:27.057152   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:27.057178   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 18:27:27.219396   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 18:27:27.219541   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 18:27:27.223329   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 18:27:27.223406   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 18:27:27.223450   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 18:27:27.223492   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 18:27:27.223536   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 18:27:27.223567   78080 cache_images.go:92] duration metric: took 685.427642ms to LoadCachedImages
	W0729 18:27:27.223653   78080 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0729 18:27:27.223672   78080 kubeadm.go:934] updating node { 192.168.50.70 8443 v1.20.0 crio true true} ...
	I0729 18:27:27.223785   78080 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-386663 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:27:27.223866   78080 ssh_runner.go:195] Run: crio config
	I0729 18:27:27.273186   78080 cni.go:84] Creating CNI manager for ""
	I0729 18:27:27.273207   78080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:27:27.273217   78080 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:27:27.273241   78080 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.70 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-386663 NodeName:old-k8s-version-386663 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 18:27:27.273424   78080 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-386663"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:27:27.273498   78080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 18:27:27.285247   78080 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:27:27.285327   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:27:27.295747   78080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 18:27:27.314192   78080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:27:27.331654   78080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 18:27:27.351717   78080 ssh_runner.go:195] Run: grep 192.168.50.70	control-plane.minikube.internal$ /etc/hosts
	I0729 18:27:27.356205   78080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:27.370446   78080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:27.509250   78080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:27:27.528776   78080 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663 for IP: 192.168.50.70
	I0729 18:27:27.528804   78080 certs.go:194] generating shared ca certs ...
	I0729 18:27:27.528823   78080 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:27.528991   78080 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:27:27.529045   78080 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:27:27.529061   78080 certs.go:256] generating profile certs ...
	I0729 18:27:27.529194   78080 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/client.key
	I0729 18:27:27.529308   78080 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.key.71ea3f9f
	I0729 18:27:27.529364   78080 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.key
	I0729 18:27:27.529529   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:27:27.529569   78080 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:27:27.529584   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:27:27.529614   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:27:27.529645   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:27:27.529689   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:27:27.529751   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:27.530573   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:27:27.582122   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:27:27.626846   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:27:27.663609   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:27:27.700294   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 18:27:27.746614   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:27:27.785212   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:27:27.834479   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:27:27.866939   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:27:27.892613   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:27:27.919059   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:27:27.947557   78080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:27:27.968625   78080 ssh_runner.go:195] Run: openssl version
	I0729 18:27:27.976500   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:27:27.991016   78080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:27:27.996228   78080 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:27:27.996285   78080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:27:28.002529   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:27:28.013844   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:27:28.025388   78080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:28.029982   78080 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:28.030042   78080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:28.036362   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:27:28.050134   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:27:28.062742   78080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:27:28.067240   78080 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:27:28.067293   78080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:27:28.072973   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 18:27:28.084143   78080 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:27:28.089526   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:27:28.096556   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:27:28.103044   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:27:28.109337   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:27:28.115455   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:27:28.121449   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 18:27:28.127395   78080 kubeadm.go:392] StartCluster: {Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:27:28.127504   78080 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:27:28.127581   78080 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:28.176772   78080 cri.go:89] found id: ""
	I0729 18:27:28.176837   78080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:27:28.187955   78080 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:27:28.187979   78080 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:27:28.188034   78080 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:27:28.197926   78080 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:27:28.199364   78080 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-386663" does not appear in /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:27:28.200382   78080 kubeconfig.go:62] /home/jenkins/minikube-integration/19345-11206/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-386663" cluster setting kubeconfig missing "old-k8s-version-386663" context setting]
	I0729 18:27:28.201737   78080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:28.287712   78080 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:27:28.300675   78080 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.70
	I0729 18:27:28.300716   78080 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:27:28.300728   78080 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:27:28.300795   78080 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:28.343880   78080 cri.go:89] found id: ""
	I0729 18:27:28.343962   78080 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:27:28.362391   78080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:27:28.372805   78080 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:27:28.372830   78080 kubeadm.go:157] found existing configuration files:
	
	I0729 18:27:28.372882   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:27:28.383540   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:27:28.383629   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:27:28.396564   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:27:28.409151   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:27:28.409208   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:27:28.422243   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:27:28.434736   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:27:28.434839   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:27:28.447681   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:27:28.460008   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:27:28.460073   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:27:28.472647   78080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:27:28.484179   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:28.634526   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:29.206575   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:29.449626   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:29.550859   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:29.681945   78080 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:27:29.682015   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:30.182098   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:30.682977   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:31.182152   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:31.682468   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:32.183031   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:32.682567   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:33.182100   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:33.682494   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:34.183075   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:34.683115   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:35.183094   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:35.683092   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:36.182173   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:36.682843   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:37.182324   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:37.682180   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:38.182453   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:38.682639   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:39.182874   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:39.682496   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:40.182727   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:40.683073   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:41.182060   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:41.682421   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:42.182813   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:42.682911   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:43.182279   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:43.682506   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:44.182109   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:44.682593   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:45.183002   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:45.682275   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:46.182491   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:46.683027   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:47.182311   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:47.682979   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:48.183024   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:48.682708   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:49.182427   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:49.682335   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:50.182146   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:50.682716   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:51.182231   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:51.683106   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:52.182739   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:52.682628   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:53.182081   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:53.682919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:54.183194   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:54.682506   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:55.182992   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:55.682152   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:56.183083   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:56.682897   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:57.182789   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:57.682237   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:58.182211   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:58.682456   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:59.182669   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:59.682863   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:00.182261   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:00.682993   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:01.182832   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:01.682899   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:02.182765   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:02.682331   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:03.182154   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:03.682499   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:04.182355   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:04.682338   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:05.182107   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:05.683125   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:06.182481   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:06.683153   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:07.182992   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:07.682582   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:08.182094   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:08.682613   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:09.182936   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:09.682444   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:10.182354   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:10.682183   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:11.182502   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:11.682466   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:12.182113   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:12.682526   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:13.183014   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:13.682449   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:14.182138   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:14.683065   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:15.182838   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:15.682680   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:16.182714   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:16.682116   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:17.182842   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:17.683114   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:18.182919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:18.683103   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:19.182074   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:19.683031   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:20.182701   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:20.682749   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:21.182949   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:21.683001   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:22.182167   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:22.682723   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:23.182510   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:23.683084   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:24.182220   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:24.682699   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:25.182288   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:25.682433   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:26.182919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:26.682851   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:27.182225   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:27.682408   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:28.182187   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:28.683034   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:29.182922   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:29.682990   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:29.683063   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:29.730368   78080 cri.go:89] found id: ""
	I0729 18:28:29.730405   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.730413   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:29.730419   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:29.730473   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:29.770368   78080 cri.go:89] found id: ""
	I0729 18:28:29.770398   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.770409   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:29.770426   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:29.770479   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:29.809873   78080 cri.go:89] found id: ""
	I0729 18:28:29.809898   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.809906   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:29.809911   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:29.809970   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:29.848980   78080 cri.go:89] found id: ""
	I0729 18:28:29.849006   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.849016   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:29.849023   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:29.849082   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:29.887261   78080 cri.go:89] found id: ""
	I0729 18:28:29.887292   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.887302   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:29.887311   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:29.887388   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:29.927011   78080 cri.go:89] found id: ""
	I0729 18:28:29.927041   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.927051   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:29.927058   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:29.927122   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:29.965577   78080 cri.go:89] found id: ""
	I0729 18:28:29.965609   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.965619   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:29.965625   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:29.965693   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:29.999180   78080 cri.go:89] found id: ""
	I0729 18:28:29.999210   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.999222   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:29.999233   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:29.999253   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:30.049401   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:30.049433   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:30.063903   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:30.063939   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:30.194776   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:30.194797   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:30.194812   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:30.261861   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:30.261906   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:32.801821   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:32.814741   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:32.814815   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:32.853490   78080 cri.go:89] found id: ""
	I0729 18:28:32.853514   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.853522   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:32.853530   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:32.853580   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:32.890314   78080 cri.go:89] found id: ""
	I0729 18:28:32.890339   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.890349   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:32.890356   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:32.890435   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:32.928231   78080 cri.go:89] found id: ""
	I0729 18:28:32.928255   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.928262   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:32.928268   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:32.928314   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:32.964024   78080 cri.go:89] found id: ""
	I0729 18:28:32.964054   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.964065   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:32.964072   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:32.964136   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:33.002099   78080 cri.go:89] found id: ""
	I0729 18:28:33.002127   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.002140   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:33.002146   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:33.002195   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:33.042238   78080 cri.go:89] found id: ""
	I0729 18:28:33.042265   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.042273   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:33.042278   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:33.042331   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:33.078715   78080 cri.go:89] found id: ""
	I0729 18:28:33.078741   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.078750   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:33.078756   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:33.078816   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:33.123304   78080 cri.go:89] found id: ""
	I0729 18:28:33.123334   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.123342   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:33.123351   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:33.123366   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:33.198950   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:33.198994   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:33.223566   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:33.223594   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:33.306500   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:33.306526   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:33.306541   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:33.379386   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:33.379421   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:35.926834   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:35.942218   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:35.942296   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:35.980115   78080 cri.go:89] found id: ""
	I0729 18:28:35.980142   78080 logs.go:276] 0 containers: []
	W0729 18:28:35.980153   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:35.980159   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:35.980221   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:36.015354   78080 cri.go:89] found id: ""
	I0729 18:28:36.015379   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.015387   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:36.015392   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:36.015456   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:36.056411   78080 cri.go:89] found id: ""
	I0729 18:28:36.056435   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.056445   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:36.056451   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:36.056499   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:36.099153   78080 cri.go:89] found id: ""
	I0729 18:28:36.099180   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.099188   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:36.099193   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:36.099241   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:36.133427   78080 cri.go:89] found id: ""
	I0729 18:28:36.133459   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.133470   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:36.133477   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:36.133544   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:36.168619   78080 cri.go:89] found id: ""
	I0729 18:28:36.168646   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.168657   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:36.168664   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:36.168723   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:36.203636   78080 cri.go:89] found id: ""
	I0729 18:28:36.203666   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.203676   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:36.203684   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:36.203747   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:36.246495   78080 cri.go:89] found id: ""
	I0729 18:28:36.246523   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.246533   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:36.246544   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:36.246561   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:36.260630   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:36.260656   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:36.337406   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:36.337424   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:36.337435   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:36.410016   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:36.410049   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:36.453458   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:36.453492   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:39.004147   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:39.018217   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:39.018279   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:39.054130   78080 cri.go:89] found id: ""
	I0729 18:28:39.054155   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.054166   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:39.054172   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:39.054219   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:39.090458   78080 cri.go:89] found id: ""
	I0729 18:28:39.090482   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.090490   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:39.090501   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:39.090548   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:39.126933   78080 cri.go:89] found id: ""
	I0729 18:28:39.126960   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.126971   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:39.126978   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:39.127042   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:39.162324   78080 cri.go:89] found id: ""
	I0729 18:28:39.162352   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.162381   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:39.162389   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:39.162450   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:39.202440   78080 cri.go:89] found id: ""
	I0729 18:28:39.202464   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.202471   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:39.202477   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:39.202537   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:39.238314   78080 cri.go:89] found id: ""
	I0729 18:28:39.238342   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.238352   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:39.238368   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:39.238436   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:39.275545   78080 cri.go:89] found id: ""
	I0729 18:28:39.275584   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.275592   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:39.275598   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:39.275663   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:39.311575   78080 cri.go:89] found id: ""
	I0729 18:28:39.311603   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.311614   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:39.311624   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:39.311643   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:39.367667   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:39.367711   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:39.381823   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:39.381852   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:39.456060   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:39.456083   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:39.456100   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:39.531747   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:39.531784   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:42.077771   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:42.092424   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:42.092512   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:42.128710   78080 cri.go:89] found id: ""
	I0729 18:28:42.128744   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.128756   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:42.128765   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:42.128834   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:42.166092   78080 cri.go:89] found id: ""
	I0729 18:28:42.166126   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.166133   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:42.166138   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:42.166186   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:42.200955   78080 cri.go:89] found id: ""
	I0729 18:28:42.200981   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.200989   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:42.200994   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:42.201053   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:42.240176   78080 cri.go:89] found id: ""
	I0729 18:28:42.240203   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.240212   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:42.240219   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:42.240279   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:42.279844   78080 cri.go:89] found id: ""
	I0729 18:28:42.279872   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.279880   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:42.279885   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:42.279946   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:42.313071   78080 cri.go:89] found id: ""
	I0729 18:28:42.313099   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.313108   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:42.313114   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:42.313187   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:42.348540   78080 cri.go:89] found id: ""
	I0729 18:28:42.348566   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.348573   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:42.348580   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:42.348630   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:42.384688   78080 cri.go:89] found id: ""
	I0729 18:28:42.384714   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.384725   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:42.384736   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:42.384750   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:42.399178   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:42.399206   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:42.472903   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:42.472921   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:42.472937   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:42.558541   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:42.558573   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:42.599403   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:42.599432   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:45.154026   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:45.167130   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:45.167200   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:45.203627   78080 cri.go:89] found id: ""
	I0729 18:28:45.203654   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.203663   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:45.203668   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:45.203714   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:45.242293   78080 cri.go:89] found id: ""
	I0729 18:28:45.242316   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.242325   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:45.242332   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:45.242403   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:45.282253   78080 cri.go:89] found id: ""
	I0729 18:28:45.282275   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.282282   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:45.282288   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:45.282335   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:45.320151   78080 cri.go:89] found id: ""
	I0729 18:28:45.320175   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.320183   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:45.320189   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:45.320250   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:45.356210   78080 cri.go:89] found id: ""
	I0729 18:28:45.356236   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.356247   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:45.356254   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:45.356316   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:45.393083   78080 cri.go:89] found id: ""
	I0729 18:28:45.393116   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.393131   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:45.393139   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:45.393199   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:45.430235   78080 cri.go:89] found id: ""
	I0729 18:28:45.430263   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.430274   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:45.430282   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:45.430346   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:45.463068   78080 cri.go:89] found id: ""
	I0729 18:28:45.463132   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.463143   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:45.463155   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:45.463203   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:45.541411   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:45.541441   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:45.581967   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:45.582001   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:45.639427   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:45.639459   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:45.655715   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:45.655741   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:45.725820   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:48.226252   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:48.240419   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:48.240494   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:48.271506   78080 cri.go:89] found id: ""
	I0729 18:28:48.271538   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.271550   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:48.271557   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:48.271615   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:48.305163   78080 cri.go:89] found id: ""
	I0729 18:28:48.305186   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.305198   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:48.305203   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:48.305252   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:48.336453   78080 cri.go:89] found id: ""
	I0729 18:28:48.336480   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.336492   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:48.336500   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:48.336557   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:48.368690   78080 cri.go:89] found id: ""
	I0729 18:28:48.368713   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.368720   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:48.368725   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:48.368784   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:48.401723   78080 cri.go:89] found id: ""
	I0729 18:28:48.401746   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.401753   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:48.401758   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:48.401822   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:48.439876   78080 cri.go:89] found id: ""
	I0729 18:28:48.439896   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.439903   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:48.439908   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:48.439956   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:48.473352   78080 cri.go:89] found id: ""
	I0729 18:28:48.473383   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.473394   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:48.473401   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:48.473461   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:48.506752   78080 cri.go:89] found id: ""
	I0729 18:28:48.506779   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.506788   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:48.506799   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:48.506815   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:48.547513   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:48.547535   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:48.599704   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:48.599733   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:48.613577   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:48.613604   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:48.681272   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:48.681290   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:48.681301   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:51.267397   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:51.280243   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:51.280317   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:51.314047   78080 cri.go:89] found id: ""
	I0729 18:28:51.314078   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.314090   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:51.314097   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:51.314162   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:51.346048   78080 cri.go:89] found id: ""
	I0729 18:28:51.346073   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.346080   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:51.346085   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:51.346144   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:51.380511   78080 cri.go:89] found id: ""
	I0729 18:28:51.380543   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.380553   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:51.380561   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:51.380637   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:51.415189   78080 cri.go:89] found id: ""
	I0729 18:28:51.415213   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.415220   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:51.415227   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:51.415310   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:51.454324   78080 cri.go:89] found id: ""
	I0729 18:28:51.454351   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.454380   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:51.454388   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:51.454449   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:51.488737   78080 cri.go:89] found id: ""
	I0729 18:28:51.488768   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.488779   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:51.488787   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:51.488848   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:51.528869   78080 cri.go:89] found id: ""
	I0729 18:28:51.528903   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.528912   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:51.528920   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:51.528972   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:51.566039   78080 cri.go:89] found id: ""
	I0729 18:28:51.566067   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.566075   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:51.566086   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:51.566102   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:51.604746   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:51.604774   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:51.661048   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:51.661089   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:51.675420   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:51.675447   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:51.754496   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:51.754531   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:51.754548   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:54.335796   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:54.350726   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:54.350784   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:54.389661   78080 cri.go:89] found id: ""
	I0729 18:28:54.389683   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.389694   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:54.389701   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:54.389761   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:54.427073   78080 cri.go:89] found id: ""
	I0729 18:28:54.427100   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.427110   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:54.427117   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:54.427178   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:54.466761   78080 cri.go:89] found id: ""
	I0729 18:28:54.466793   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.466802   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:54.466808   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:54.466871   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:54.501115   78080 cri.go:89] found id: ""
	I0729 18:28:54.501144   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.501159   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:54.501167   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:54.501229   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:54.535430   78080 cri.go:89] found id: ""
	I0729 18:28:54.535461   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.535472   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:54.535480   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:54.535543   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:54.574994   78080 cri.go:89] found id: ""
	I0729 18:28:54.575024   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.575034   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:54.575041   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:54.575107   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:54.608770   78080 cri.go:89] found id: ""
	I0729 18:28:54.608792   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.608800   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:54.608805   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:54.608850   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:54.648026   78080 cri.go:89] found id: ""
	I0729 18:28:54.648050   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.648057   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:54.648066   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:54.648077   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:54.728445   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:54.728485   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:54.774752   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:54.774781   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:54.826549   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:54.826582   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:54.840366   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:54.840394   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:54.907422   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:57.408469   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:57.421855   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:57.421923   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:57.457794   78080 cri.go:89] found id: ""
	I0729 18:28:57.457816   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.457824   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:57.457829   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:57.457908   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:57.492851   78080 cri.go:89] found id: ""
	I0729 18:28:57.492880   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.492888   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:57.492894   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:57.492946   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:57.528221   78080 cri.go:89] found id: ""
	I0729 18:28:57.528249   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.528258   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:57.528265   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:57.528330   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:57.565504   78080 cri.go:89] found id: ""
	I0729 18:28:57.565536   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.565547   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:57.565554   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:57.565618   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:57.599391   78080 cri.go:89] found id: ""
	I0729 18:28:57.599418   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.599426   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:57.599432   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:57.599491   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:57.643757   78080 cri.go:89] found id: ""
	I0729 18:28:57.643784   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.643798   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:57.643806   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:57.643867   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:57.680825   78080 cri.go:89] found id: ""
	I0729 18:28:57.680853   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.680864   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:57.680871   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:57.680936   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:57.714450   78080 cri.go:89] found id: ""
	I0729 18:28:57.714479   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.714490   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:57.714500   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:57.714516   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:57.798411   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:57.798437   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:57.798453   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:57.878210   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:57.878246   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:57.917476   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:57.917505   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:57.971395   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:57.971432   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:00.486419   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:00.500625   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:00.500703   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:00.539625   78080 cri.go:89] found id: ""
	I0729 18:29:00.539650   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.539659   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:00.539682   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:00.539737   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:00.577252   78080 cri.go:89] found id: ""
	I0729 18:29:00.577284   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.577297   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:00.577303   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:00.577350   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:00.611850   78080 cri.go:89] found id: ""
	I0729 18:29:00.611878   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.611886   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:00.611892   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:00.611939   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:00.648964   78080 cri.go:89] found id: ""
	I0729 18:29:00.648989   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.648996   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:00.649003   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:00.649062   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:00.686124   78080 cri.go:89] found id: ""
	I0729 18:29:00.686147   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.686156   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:00.686161   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:00.686217   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:00.721166   78080 cri.go:89] found id: ""
	I0729 18:29:00.721195   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.721205   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:00.721213   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:00.721276   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:00.758394   78080 cri.go:89] found id: ""
	I0729 18:29:00.758423   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.758431   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:00.758436   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:00.758491   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:00.793487   78080 cri.go:89] found id: ""
	I0729 18:29:00.793514   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.793523   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:00.793533   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:00.793549   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:00.807069   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:00.807106   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:00.880611   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:00.880629   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:00.880641   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:00.963534   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:00.963568   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:01.004145   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:01.004174   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:03.560985   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:03.574407   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:03.574476   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:03.608027   78080 cri.go:89] found id: ""
	I0729 18:29:03.608048   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.608057   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:03.608062   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:03.608119   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:03.644777   78080 cri.go:89] found id: ""
	I0729 18:29:03.644804   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.644814   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:03.644821   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:03.644895   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:03.684050   78080 cri.go:89] found id: ""
	I0729 18:29:03.684074   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.684082   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:03.684089   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:03.684149   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:03.724350   78080 cri.go:89] found id: ""
	I0729 18:29:03.724376   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.724383   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:03.724390   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:03.724439   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:03.766859   78080 cri.go:89] found id: ""
	I0729 18:29:03.766887   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.766898   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:03.766905   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:03.766967   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:03.800535   78080 cri.go:89] found id: ""
	I0729 18:29:03.800562   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.800572   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:03.800579   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:03.800639   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:03.834991   78080 cri.go:89] found id: ""
	I0729 18:29:03.835011   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.835019   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:03.835024   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:03.835073   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:03.869159   78080 cri.go:89] found id: ""
	I0729 18:29:03.869191   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.869201   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:03.869211   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:03.869226   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:03.940451   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:03.940469   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:03.940487   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:04.020880   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:04.020910   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:04.064707   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:04.064728   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:04.121551   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:04.121587   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:06.636983   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:06.651500   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:06.651582   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:06.686556   78080 cri.go:89] found id: ""
	I0729 18:29:06.686582   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.686592   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:06.686599   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:06.686660   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:06.721967   78080 cri.go:89] found id: ""
	I0729 18:29:06.721996   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.722008   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:06.722016   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:06.722115   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:06.760409   78080 cri.go:89] found id: ""
	I0729 18:29:06.760433   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.760440   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:06.760445   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:06.760499   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:06.794050   78080 cri.go:89] found id: ""
	I0729 18:29:06.794074   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.794081   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:06.794087   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:06.794143   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:06.826445   78080 cri.go:89] found id: ""
	I0729 18:29:06.826471   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.826478   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:06.826484   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:06.826544   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:06.860680   78080 cri.go:89] found id: ""
	I0729 18:29:06.860700   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.860706   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:06.860712   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:06.860761   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:06.898192   78080 cri.go:89] found id: ""
	I0729 18:29:06.898215   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.898223   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:06.898229   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:06.898284   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:06.931892   78080 cri.go:89] found id: ""
	I0729 18:29:06.931920   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.931930   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:06.931940   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:06.931955   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:06.987265   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:06.987294   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:07.043520   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:07.043547   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:07.056995   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:07.057019   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:07.124932   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:07.124956   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:07.124971   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:09.708947   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:09.723497   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:09.723565   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:09.762686   78080 cri.go:89] found id: ""
	I0729 18:29:09.762714   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.762725   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:09.762733   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:09.762797   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:09.799674   78080 cri.go:89] found id: ""
	I0729 18:29:09.799699   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.799708   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:09.799715   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:09.799775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:09.836121   78080 cri.go:89] found id: ""
	I0729 18:29:09.836147   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.836156   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:09.836161   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:09.836209   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:09.872758   78080 cri.go:89] found id: ""
	I0729 18:29:09.872783   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.872791   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:09.872797   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:09.872842   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:09.911681   78080 cri.go:89] found id: ""
	I0729 18:29:09.911711   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.911719   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:09.911724   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:09.911773   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:09.951531   78080 cri.go:89] found id: ""
	I0729 18:29:09.951554   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.951561   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:09.951567   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:09.951624   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:09.985568   78080 cri.go:89] found id: ""
	I0729 18:29:09.985597   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.985606   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:09.985612   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:09.985661   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:10.020369   78080 cri.go:89] found id: ""
	I0729 18:29:10.020394   78080 logs.go:276] 0 containers: []
	W0729 18:29:10.020402   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:10.020409   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:10.020421   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:10.076538   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:10.076574   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:10.090954   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:10.090980   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:10.165843   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:10.165875   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:10.165890   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:10.242438   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:10.242469   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:12.781369   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:12.797066   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:12.797160   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:12.832500   78080 cri.go:89] found id: ""
	I0729 18:29:12.832528   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.832545   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:12.832552   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:12.832615   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:12.866390   78080 cri.go:89] found id: ""
	I0729 18:29:12.866420   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.866428   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:12.866434   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:12.866494   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:12.901616   78080 cri.go:89] found id: ""
	I0729 18:29:12.901636   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.901644   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:12.901649   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:12.901713   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:12.935954   78080 cri.go:89] found id: ""
	I0729 18:29:12.935976   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.935985   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:12.935993   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:12.936053   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:12.970570   78080 cri.go:89] found id: ""
	I0729 18:29:12.970623   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.970637   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:12.970645   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:12.970702   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:13.008629   78080 cri.go:89] found id: ""
	I0729 18:29:13.008658   78080 logs.go:276] 0 containers: []
	W0729 18:29:13.008666   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:13.008672   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:13.008725   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:13.045689   78080 cri.go:89] found id: ""
	I0729 18:29:13.045713   78080 logs.go:276] 0 containers: []
	W0729 18:29:13.045721   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:13.045726   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:13.045773   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:13.084707   78080 cri.go:89] found id: ""
	I0729 18:29:13.084735   78080 logs.go:276] 0 containers: []
	W0729 18:29:13.084745   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:13.084756   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:13.084774   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:13.161884   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:13.161920   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:13.205377   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:13.205410   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:13.258161   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:13.258189   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:13.272208   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:13.272240   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:13.347519   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:15.848068   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:15.861773   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:15.861851   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:15.902421   78080 cri.go:89] found id: ""
	I0729 18:29:15.902449   78080 logs.go:276] 0 containers: []
	W0729 18:29:15.902458   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:15.902466   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:15.902532   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:15.939552   78080 cri.go:89] found id: ""
	I0729 18:29:15.939576   78080 logs.go:276] 0 containers: []
	W0729 18:29:15.939583   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:15.939588   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:15.939645   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:15.974424   78080 cri.go:89] found id: ""
	I0729 18:29:15.974454   78080 logs.go:276] 0 containers: []
	W0729 18:29:15.974463   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:15.974468   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:15.974516   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:16.010955   78080 cri.go:89] found id: ""
	I0729 18:29:16.010993   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.011000   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:16.011006   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:16.011062   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:16.046785   78080 cri.go:89] found id: ""
	I0729 18:29:16.046815   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.046825   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:16.046832   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:16.046887   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:16.082691   78080 cri.go:89] found id: ""
	I0729 18:29:16.082721   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.082731   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:16.082739   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:16.082796   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:16.127633   78080 cri.go:89] found id: ""
	I0729 18:29:16.127663   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.127676   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:16.127684   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:16.127741   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:16.162641   78080 cri.go:89] found id: ""
	I0729 18:29:16.162662   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.162670   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:16.162684   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:16.162695   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:16.215132   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:16.215162   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:16.229581   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:16.229607   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:16.303178   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:16.303198   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:16.303212   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:16.383739   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:16.383775   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:18.924292   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:18.937571   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:18.937626   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:18.970523   78080 cri.go:89] found id: ""
	I0729 18:29:18.970554   78080 logs.go:276] 0 containers: []
	W0729 18:29:18.970563   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:18.970568   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:18.970624   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:19.005448   78080 cri.go:89] found id: ""
	I0729 18:29:19.005471   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.005478   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:19.005483   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:19.005538   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:19.044352   78080 cri.go:89] found id: ""
	I0729 18:29:19.044377   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.044386   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:19.044393   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:19.044448   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:19.079288   78080 cri.go:89] found id: ""
	I0729 18:29:19.079317   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.079327   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:19.079333   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:19.079402   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:19.122932   78080 cri.go:89] found id: ""
	I0729 18:29:19.122954   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.122961   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:19.122967   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:19.123020   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:19.166992   78080 cri.go:89] found id: ""
	I0729 18:29:19.167018   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.167025   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:19.167031   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:19.167103   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:19.215301   78080 cri.go:89] found id: ""
	I0729 18:29:19.215331   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.215341   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:19.215355   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:19.215419   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:19.267635   78080 cri.go:89] found id: ""
	I0729 18:29:19.267657   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.267664   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:19.267671   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:19.267682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:19.319924   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:19.319962   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:19.333987   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:19.334010   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:19.406541   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:19.406558   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:19.406571   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:19.487388   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:19.487426   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:22.027745   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:22.041145   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:22.041218   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:22.080000   78080 cri.go:89] found id: ""
	I0729 18:29:22.080022   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.080029   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:22.080034   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:22.080079   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:22.116385   78080 cri.go:89] found id: ""
	I0729 18:29:22.116415   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.116425   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:22.116431   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:22.116492   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:22.150530   78080 cri.go:89] found id: ""
	I0729 18:29:22.150552   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.150559   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:22.150565   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:22.150621   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:22.188782   78080 cri.go:89] found id: ""
	I0729 18:29:22.188808   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.188817   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:22.188822   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:22.188873   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:22.227117   78080 cri.go:89] found id: ""
	I0729 18:29:22.227152   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.227162   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:22.227169   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:22.227234   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:22.263057   78080 cri.go:89] found id: ""
	I0729 18:29:22.263079   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.263086   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:22.263091   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:22.263145   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:22.297368   78080 cri.go:89] found id: ""
	I0729 18:29:22.297391   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.297399   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:22.297406   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:22.297466   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:22.334117   78080 cri.go:89] found id: ""
	I0729 18:29:22.334149   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.334159   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:22.334170   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:22.334184   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:22.349344   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:22.349369   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:22.415720   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:22.415743   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:22.415758   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:22.494937   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:22.494971   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:22.536352   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:22.536382   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:25.087795   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:25.103985   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:25.104050   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:25.158532   78080 cri.go:89] found id: ""
	I0729 18:29:25.158562   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.158572   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:25.158580   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:25.158641   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:25.216740   78080 cri.go:89] found id: ""
	I0729 18:29:25.216762   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.216769   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:25.216775   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:25.216827   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:25.254827   78080 cri.go:89] found id: ""
	I0729 18:29:25.254855   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.254865   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:25.254872   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:25.254934   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:25.289377   78080 cri.go:89] found id: ""
	I0729 18:29:25.289407   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.289417   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:25.289424   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:25.289484   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:25.328111   78080 cri.go:89] found id: ""
	I0729 18:29:25.328144   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.328153   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:25.328161   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:25.328224   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:25.364779   78080 cri.go:89] found id: ""
	I0729 18:29:25.364808   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.364815   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:25.364827   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:25.364874   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:25.402906   78080 cri.go:89] found id: ""
	I0729 18:29:25.402935   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.402942   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:25.402948   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:25.403007   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:25.438747   78080 cri.go:89] found id: ""
	I0729 18:29:25.438770   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.438778   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:25.438787   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:25.438803   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:25.452803   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:25.452829   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:25.527575   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:25.527593   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:25.527610   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:25.622437   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:25.622482   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:25.661451   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:25.661478   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:28.213898   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:28.230013   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:28.230071   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:28.265484   78080 cri.go:89] found id: ""
	I0729 18:29:28.265511   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.265521   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:28.265530   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:28.265594   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:28.306374   78080 cri.go:89] found id: ""
	I0729 18:29:28.306428   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.306441   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:28.306448   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:28.306501   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:28.340274   78080 cri.go:89] found id: ""
	I0729 18:29:28.340299   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.340309   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:28.340316   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:28.340379   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:28.373928   78080 cri.go:89] found id: ""
	I0729 18:29:28.373973   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.373982   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:28.373990   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:28.374052   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:28.407075   78080 cri.go:89] found id: ""
	I0729 18:29:28.407107   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.407120   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:28.407129   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:28.407215   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:28.444501   78080 cri.go:89] found id: ""
	I0729 18:29:28.444528   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.444536   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:28.444543   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:28.444614   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:28.487513   78080 cri.go:89] found id: ""
	I0729 18:29:28.487540   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.487548   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:28.487554   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:28.487611   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:28.521957   78080 cri.go:89] found id: ""
	I0729 18:29:28.521990   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.522000   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:28.522011   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:28.522027   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:28.536880   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:28.536918   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:28.609486   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:28.609513   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:28.609528   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:28.694086   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:28.694125   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:28.733930   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:28.733964   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:31.292260   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:31.305840   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:31.305899   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:31.342510   78080 cri.go:89] found id: ""
	I0729 18:29:31.342539   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.342550   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:31.342557   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:31.342613   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:31.375093   78080 cri.go:89] found id: ""
	I0729 18:29:31.375118   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.375128   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:31.375135   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:31.375198   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:31.408554   78080 cri.go:89] found id: ""
	I0729 18:29:31.408576   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.408583   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:31.408588   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:31.408660   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:31.448748   78080 cri.go:89] found id: ""
	I0729 18:29:31.448774   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.448783   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:31.448796   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:31.448855   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:31.483541   78080 cri.go:89] found id: ""
	I0729 18:29:31.483564   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.483572   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:31.483578   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:31.483637   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:31.518173   78080 cri.go:89] found id: ""
	I0729 18:29:31.518198   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.518209   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:31.518217   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:31.518279   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:31.553345   78080 cri.go:89] found id: ""
	I0729 18:29:31.553371   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.553379   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:31.553384   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:31.553439   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:31.591857   78080 cri.go:89] found id: ""
	I0729 18:29:31.591887   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.591905   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:31.591916   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:31.591929   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:31.648404   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:31.648436   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:31.661455   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:31.661477   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:31.732978   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:31.732997   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:31.733009   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:31.812105   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:31.812145   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:34.353079   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:34.366759   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:34.366817   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:34.400944   78080 cri.go:89] found id: ""
	I0729 18:29:34.400974   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.400984   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:34.400991   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:34.401055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:34.439348   78080 cri.go:89] found id: ""
	I0729 18:29:34.439373   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.439383   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:34.439395   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:34.439444   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:34.473969   78080 cri.go:89] found id: ""
	I0729 18:29:34.473991   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.474010   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:34.474017   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:34.474080   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:34.507741   78080 cri.go:89] found id: ""
	I0729 18:29:34.507770   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.507778   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:34.507784   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:34.507845   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:34.543794   78080 cri.go:89] found id: ""
	I0729 18:29:34.543815   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.543823   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:34.543830   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:34.543895   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:34.577893   78080 cri.go:89] found id: ""
	I0729 18:29:34.577918   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.577926   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:34.577931   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:34.577978   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:34.612703   78080 cri.go:89] found id: ""
	I0729 18:29:34.612735   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.612745   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:34.612752   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:34.612815   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:34.648167   78080 cri.go:89] found id: ""
	I0729 18:29:34.648197   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.648209   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:34.648219   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:34.648233   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:34.689821   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:34.689848   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:34.743902   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:34.743935   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:34.757400   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:34.757426   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:34.833684   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:34.833706   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:34.833721   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:37.419270   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:37.433249   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:37.433301   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:37.469991   78080 cri.go:89] found id: ""
	I0729 18:29:37.470021   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.470031   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:37.470038   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:37.470098   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:37.504511   78080 cri.go:89] found id: ""
	I0729 18:29:37.504537   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.504548   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:37.504554   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:37.504612   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:37.545304   78080 cri.go:89] found id: ""
	I0729 18:29:37.545332   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.545342   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:37.545349   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:37.545406   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:37.584255   78080 cri.go:89] found id: ""
	I0729 18:29:37.584280   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.584287   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:37.584292   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:37.584345   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:37.620917   78080 cri.go:89] found id: ""
	I0729 18:29:37.620943   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.620951   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:37.620958   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:37.621022   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:37.659381   78080 cri.go:89] found id: ""
	I0729 18:29:37.659405   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.659414   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:37.659419   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:37.659486   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:37.701337   78080 cri.go:89] found id: ""
	I0729 18:29:37.701360   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.701368   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:37.701373   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:37.701426   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:37.737142   78080 cri.go:89] found id: ""
	I0729 18:29:37.737168   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.737177   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:37.737186   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:37.737201   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:37.789951   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:37.789992   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:37.804759   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:37.804784   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:37.881777   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:37.881794   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:37.881808   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:37.970593   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:37.970625   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:40.511557   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:40.525472   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:40.525527   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:40.564227   78080 cri.go:89] found id: ""
	I0729 18:29:40.564253   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.564263   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:40.564270   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:40.564336   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:40.600384   78080 cri.go:89] found id: ""
	I0729 18:29:40.600409   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.600417   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:40.600423   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:40.600475   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:40.634819   78080 cri.go:89] found id: ""
	I0729 18:29:40.634843   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.634858   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:40.634866   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:40.634913   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:40.669963   78080 cri.go:89] found id: ""
	I0729 18:29:40.669991   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.669999   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:40.670006   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:40.670069   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:40.705680   78080 cri.go:89] found id: ""
	I0729 18:29:40.705705   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.705714   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:40.705719   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:40.705775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:40.743691   78080 cri.go:89] found id: ""
	I0729 18:29:40.743715   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.743725   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:40.743732   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:40.743820   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:40.783858   78080 cri.go:89] found id: ""
	I0729 18:29:40.783889   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.783898   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:40.783903   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:40.783953   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:40.821499   78080 cri.go:89] found id: ""
	I0729 18:29:40.821527   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.821537   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:40.821547   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:40.821562   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:40.874941   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:40.874972   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:40.888034   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:40.888057   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:40.960013   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:40.960032   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:40.960044   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:41.043013   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:41.043042   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:43.583555   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:43.597120   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:43.597193   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:43.631500   78080 cri.go:89] found id: ""
	I0729 18:29:43.631526   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.631535   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:43.631542   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:43.631607   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:43.667003   78080 cri.go:89] found id: ""
	I0729 18:29:43.667029   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.667037   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:43.667042   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:43.667102   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:43.701471   78080 cri.go:89] found id: ""
	I0729 18:29:43.701502   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.701510   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:43.701515   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:43.701569   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:43.740037   78080 cri.go:89] found id: ""
	I0729 18:29:43.740058   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.740067   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:43.740074   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:43.740145   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:43.772584   78080 cri.go:89] found id: ""
	I0729 18:29:43.772610   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.772620   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:43.772626   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:43.772689   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:43.806340   78080 cri.go:89] found id: ""
	I0729 18:29:43.806382   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.806393   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:43.806401   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:43.806480   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:43.840085   78080 cri.go:89] found id: ""
	I0729 18:29:43.840109   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.840118   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:43.840133   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:43.840198   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:43.873412   78080 cri.go:89] found id: ""
	I0729 18:29:43.873438   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.873448   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:43.873458   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:43.873473   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:43.928762   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:43.928790   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:43.944129   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:43.944156   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:44.017330   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:44.017349   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:44.017361   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:44.106858   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:44.106915   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:46.651050   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:46.665253   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:46.665310   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:46.698846   78080 cri.go:89] found id: ""
	I0729 18:29:46.698871   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.698881   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:46.698888   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:46.698956   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:46.734354   78080 cri.go:89] found id: ""
	I0729 18:29:46.734395   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.734405   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:46.734413   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:46.734468   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:46.771978   78080 cri.go:89] found id: ""
	I0729 18:29:46.771999   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.772007   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:46.772012   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:46.772059   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:46.807231   78080 cri.go:89] found id: ""
	I0729 18:29:46.807255   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.807263   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:46.807272   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:46.807329   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:46.842257   78080 cri.go:89] found id: ""
	I0729 18:29:46.842278   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.842306   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:46.842312   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:46.842373   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:46.876287   78080 cri.go:89] found id: ""
	I0729 18:29:46.876309   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.876317   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:46.876323   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:46.876389   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:46.909695   78080 cri.go:89] found id: ""
	I0729 18:29:46.909719   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.909726   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:46.909731   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:46.909806   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:46.951768   78080 cri.go:89] found id: ""
	I0729 18:29:46.951798   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.951807   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:46.951815   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:46.951825   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:47.025467   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:47.025485   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:47.025497   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:47.106336   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:47.106391   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:47.145652   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:47.145682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:47.200857   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:47.200886   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:49.715401   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:49.729703   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:49.729776   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:49.770016   78080 cri.go:89] found id: ""
	I0729 18:29:49.770039   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.770062   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:49.770070   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:49.770127   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:49.805464   78080 cri.go:89] found id: ""
	I0729 18:29:49.805487   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.805495   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:49.805500   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:49.805560   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:49.838739   78080 cri.go:89] found id: ""
	I0729 18:29:49.838770   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.838782   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:49.838789   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:49.838861   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:49.881168   78080 cri.go:89] found id: ""
	I0729 18:29:49.881194   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.881202   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:49.881208   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:49.881269   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:49.919978   78080 cri.go:89] found id: ""
	I0729 18:29:49.919999   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.920006   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:49.920012   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:49.920079   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:49.958971   78080 cri.go:89] found id: ""
	I0729 18:29:49.958996   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.959006   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:49.959013   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:49.959063   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:50.001253   78080 cri.go:89] found id: ""
	I0729 18:29:50.001281   78080 logs.go:276] 0 containers: []
	W0729 18:29:50.001291   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:50.001298   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:50.001362   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:50.038729   78080 cri.go:89] found id: ""
	I0729 18:29:50.038755   78080 logs.go:276] 0 containers: []
	W0729 18:29:50.038766   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:50.038776   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:50.038789   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:50.082540   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:50.082567   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:50.132372   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:50.132413   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:50.146806   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:50.146835   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:50.214495   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:50.214515   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:50.214532   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:52.793987   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:52.808085   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:52.808149   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:52.844869   78080 cri.go:89] found id: ""
	I0729 18:29:52.844904   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.844917   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:52.844925   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:52.844986   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:52.878097   78080 cri.go:89] found id: ""
	I0729 18:29:52.878122   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.878135   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:52.878142   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:52.878191   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:52.910843   78080 cri.go:89] found id: ""
	I0729 18:29:52.910884   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.910894   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:52.910902   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:52.910953   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:52.943233   78080 cri.go:89] found id: ""
	I0729 18:29:52.943257   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.943267   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:52.943274   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:52.943335   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:52.978354   78080 cri.go:89] found id: ""
	I0729 18:29:52.978402   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.978413   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:52.978423   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:52.978503   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:53.011238   78080 cri.go:89] found id: ""
	I0729 18:29:53.011266   78080 logs.go:276] 0 containers: []
	W0729 18:29:53.011276   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:53.011283   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:53.011336   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:53.048787   78080 cri.go:89] found id: ""
	I0729 18:29:53.048817   78080 logs.go:276] 0 containers: []
	W0729 18:29:53.048827   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:53.048834   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:53.048900   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:53.086108   78080 cri.go:89] found id: ""
	I0729 18:29:53.086135   78080 logs.go:276] 0 containers: []
	W0729 18:29:53.086156   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:53.086176   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:53.086195   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:53.137552   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:53.137580   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:53.151308   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:53.151333   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:53.225968   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:53.225992   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:53.226004   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:53.308111   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:53.308145   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
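The block above is one complete diagnostic pass: minikube polls for a kube-apiserver process, asks CRI-O (via crictl) whether any control-plane container exists (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds none, and falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying a few seconds later. The same checks can be reproduced by hand from the host; this is only a sketch, where the inner commands are copied from the log, <profile> is a placeholder for the cluster profile name, and running them through "minikube ssh" with a quoted command is an assumption about the environment:

    # list all CRI-O containers (running or exited) on the node
    minikube ssh -p <profile> "sudo crictl ps -a"
    # check whether any kube-apiserver process exists, as the loop does
    minikube ssh -p <profile> "sudo pgrep -xnf kube-apiserver.*minikube.*"
    # re-run the failing describe-nodes call exactly as minikube does
    minikube ssh -p <profile> "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
    # recent kubelet log, the same window the report gathers
    minikube ssh -p <profile> "sudo journalctl -u kubelet -n 400"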
	I0729 18:29:55.850207   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:55.864003   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:55.864054   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:55.898109   78080 cri.go:89] found id: ""
	I0729 18:29:55.898134   78080 logs.go:276] 0 containers: []
	W0729 18:29:55.898142   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:55.898148   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:55.898201   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:55.931616   78080 cri.go:89] found id: ""
	I0729 18:29:55.931643   78080 logs.go:276] 0 containers: []
	W0729 18:29:55.931653   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:55.931660   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:55.931719   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:55.969034   78080 cri.go:89] found id: ""
	I0729 18:29:55.969063   78080 logs.go:276] 0 containers: []
	W0729 18:29:55.969073   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:55.969080   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:55.969142   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:56.007552   78080 cri.go:89] found id: ""
	I0729 18:29:56.007576   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.007586   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:56.007592   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:56.007653   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:56.044342   78080 cri.go:89] found id: ""
	I0729 18:29:56.044367   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.044376   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:56.044382   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:56.044437   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:56.078352   78080 cri.go:89] found id: ""
	I0729 18:29:56.078396   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.078412   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:56.078420   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:56.078471   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:56.116505   78080 cri.go:89] found id: ""
	I0729 18:29:56.116532   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.116543   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:56.116551   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:56.116611   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:56.151493   78080 cri.go:89] found id: ""
	I0729 18:29:56.151516   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.151523   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:56.151530   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:56.151542   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:56.206170   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:56.206198   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:56.219658   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:56.219684   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:56.290279   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:56.290300   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:56.290312   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:56.371352   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:56.371382   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:58.908793   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:58.922566   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:58.922626   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:58.959375   78080 cri.go:89] found id: ""
	I0729 18:29:58.959397   78080 logs.go:276] 0 containers: []
	W0729 18:29:58.959404   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:58.959410   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:58.959459   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:58.993235   78080 cri.go:89] found id: ""
	I0729 18:29:58.993257   78080 logs.go:276] 0 containers: []
	W0729 18:29:58.993265   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:58.993271   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:58.993331   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:59.028186   78080 cri.go:89] found id: ""
	I0729 18:29:59.028212   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.028220   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:59.028225   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:59.028271   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:59.063589   78080 cri.go:89] found id: ""
	I0729 18:29:59.063619   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.063628   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:59.063635   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:59.063695   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:59.101116   78080 cri.go:89] found id: ""
	I0729 18:29:59.101142   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.101152   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:59.101158   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:59.101208   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:59.135288   78080 cri.go:89] found id: ""
	I0729 18:29:59.135314   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.135324   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:59.135332   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:59.135395   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:59.170520   78080 cri.go:89] found id: ""
	I0729 18:29:59.170549   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.170557   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:59.170562   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:59.170618   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:59.229796   78080 cri.go:89] found id: ""
	I0729 18:29:59.229825   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.229835   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:59.229843   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:59.229871   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:59.244654   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:59.244682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:59.321262   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:59.321286   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:59.321301   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:59.401423   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:59.401459   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:59.442916   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:59.442938   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:01.995116   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:02.008454   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:02.008516   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:02.046412   78080 cri.go:89] found id: ""
	I0729 18:30:02.046431   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.046438   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:02.046443   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:02.046487   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:02.082444   78080 cri.go:89] found id: ""
	I0729 18:30:02.082466   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.082476   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:02.082482   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:02.082551   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:02.116013   78080 cri.go:89] found id: ""
	I0729 18:30:02.116041   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.116052   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:02.116058   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:02.116127   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:02.155817   78080 cri.go:89] found id: ""
	I0729 18:30:02.155844   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.155854   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:02.155862   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:02.155914   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:02.195518   78080 cri.go:89] found id: ""
	I0729 18:30:02.195548   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.195556   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:02.195563   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:02.195624   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:02.228248   78080 cri.go:89] found id: ""
	I0729 18:30:02.228274   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.228283   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:02.228289   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:02.228370   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:02.262441   78080 cri.go:89] found id: ""
	I0729 18:30:02.262469   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.262479   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:02.262486   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:02.262546   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:02.296900   78080 cri.go:89] found id: ""
	I0729 18:30:02.296930   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.296937   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:02.296953   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:02.296965   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:02.352356   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:02.352389   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:02.366336   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:02.366365   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:02.441367   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:02.441389   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:02.441403   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:02.524134   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:02.524173   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:05.071581   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:05.085481   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:05.085535   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:05.121610   78080 cri.go:89] found id: ""
	I0729 18:30:05.121636   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.121644   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:05.121652   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:05.121716   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:05.157382   78080 cri.go:89] found id: ""
	I0729 18:30:05.157406   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.157413   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:05.157418   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:05.157478   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:05.195552   78080 cri.go:89] found id: ""
	I0729 18:30:05.195582   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.195593   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:05.195600   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:05.195657   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:05.231071   78080 cri.go:89] found id: ""
	I0729 18:30:05.231095   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.231103   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:05.231108   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:05.231165   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:05.267445   78080 cri.go:89] found id: ""
	I0729 18:30:05.267474   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.267485   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:05.267493   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:05.267555   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:05.304258   78080 cri.go:89] found id: ""
	I0729 18:30:05.304279   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.304286   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:05.304291   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:05.304338   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:05.339155   78080 cri.go:89] found id: ""
	I0729 18:30:05.339176   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.339184   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:05.339190   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:05.339243   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:05.375291   78080 cri.go:89] found id: ""
	I0729 18:30:05.375328   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.375337   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:05.375346   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:05.375361   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:05.446196   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:05.446221   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:05.446236   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:05.529421   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:05.529457   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:05.570234   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:05.570269   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:05.629349   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:05.629391   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:08.151320   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:08.165983   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:08.166045   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:08.205703   78080 cri.go:89] found id: ""
	I0729 18:30:08.205726   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.205733   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:08.205738   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:08.205786   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:08.245919   78080 cri.go:89] found id: ""
	I0729 18:30:08.245946   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.245957   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:08.245964   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:08.246024   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:08.286595   78080 cri.go:89] found id: ""
	I0729 18:30:08.286621   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.286631   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:08.286638   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:08.286700   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:08.330032   78080 cri.go:89] found id: ""
	I0729 18:30:08.330060   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.330070   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:08.330077   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:08.330140   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:08.362535   78080 cri.go:89] found id: ""
	I0729 18:30:08.362567   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.362578   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:08.362586   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:08.362645   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:08.397648   78080 cri.go:89] found id: ""
	I0729 18:30:08.397678   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.397688   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:08.397704   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:08.397766   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:08.433615   78080 cri.go:89] found id: ""
	I0729 18:30:08.433693   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.433716   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:08.433734   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:08.433809   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:08.465765   78080 cri.go:89] found id: ""
	I0729 18:30:08.465792   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.465803   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:08.465814   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:08.465829   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:08.536332   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:08.536360   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:08.536375   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:08.613737   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:08.613776   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:08.659707   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:08.659736   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:08.712702   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:08.712736   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:11.226660   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:11.240852   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:11.240919   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:11.277632   78080 cri.go:89] found id: ""
	I0729 18:30:11.277664   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.277675   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:11.277682   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:11.277751   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:11.312458   78080 cri.go:89] found id: ""
	I0729 18:30:11.312478   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.312485   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:11.312491   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:11.312551   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:11.350375   78080 cri.go:89] found id: ""
	I0729 18:30:11.350406   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.350416   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:11.350424   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:11.350486   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:11.389280   78080 cri.go:89] found id: ""
	I0729 18:30:11.389307   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.389317   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:11.389324   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:11.389382   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:11.424907   78080 cri.go:89] found id: ""
	I0729 18:30:11.424936   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.424944   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:11.424949   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:11.425009   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:11.480686   78080 cri.go:89] found id: ""
	I0729 18:30:11.480713   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.480720   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:11.480726   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:11.480778   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:11.514831   78080 cri.go:89] found id: ""
	I0729 18:30:11.514857   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.514864   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:11.514870   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:11.514917   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:11.547930   78080 cri.go:89] found id: ""
	I0729 18:30:11.547955   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.547964   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:11.547974   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:11.547989   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:11.586068   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:11.586098   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:11.646857   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:11.646892   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:11.663549   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:11.663576   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:11.731362   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:11.731383   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:11.731397   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:14.315531   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:14.330485   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:14.330544   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:14.363403   78080 cri.go:89] found id: ""
	I0729 18:30:14.363433   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.363444   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:14.363451   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:14.363516   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:14.401204   78080 cri.go:89] found id: ""
	I0729 18:30:14.401227   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.401234   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:14.401240   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:14.401301   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:14.436737   78080 cri.go:89] found id: ""
	I0729 18:30:14.436765   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.436775   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:14.436782   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:14.436844   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:14.471376   78080 cri.go:89] found id: ""
	I0729 18:30:14.471403   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.471411   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:14.471419   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:14.471478   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:14.506883   78080 cri.go:89] found id: ""
	I0729 18:30:14.506914   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.506925   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:14.506932   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:14.506990   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:14.546444   78080 cri.go:89] found id: ""
	I0729 18:30:14.546469   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.546479   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:14.546486   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:14.546552   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:14.580282   78080 cri.go:89] found id: ""
	I0729 18:30:14.580313   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.580320   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:14.580326   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:14.580387   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:14.614185   78080 cri.go:89] found id: ""
	I0729 18:30:14.614210   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.614220   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:14.614231   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:14.614246   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:14.652588   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:14.652610   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:14.706056   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:14.706090   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:14.719332   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:14.719356   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:14.792087   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:14.792115   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:14.792136   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:17.375639   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:17.389473   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:17.389535   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:17.424485   78080 cri.go:89] found id: ""
	I0729 18:30:17.424513   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.424521   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:17.424527   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:17.424572   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:17.461100   78080 cri.go:89] found id: ""
	I0729 18:30:17.461129   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.461136   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:17.461141   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:17.461191   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:17.494866   78080 cri.go:89] found id: ""
	I0729 18:30:17.494894   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.494902   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:17.494907   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:17.494983   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:17.529897   78080 cri.go:89] found id: ""
	I0729 18:30:17.529924   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.529934   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:17.529940   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:17.530002   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:17.569870   78080 cri.go:89] found id: ""
	I0729 18:30:17.569897   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.569905   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:17.569910   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:17.569958   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:17.605324   78080 cri.go:89] found id: ""
	I0729 18:30:17.605364   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.605384   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:17.605392   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:17.605457   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:17.640552   78080 cri.go:89] found id: ""
	I0729 18:30:17.640583   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.640595   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:17.640602   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:17.640668   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:17.679769   78080 cri.go:89] found id: ""
	I0729 18:30:17.679800   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.679808   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:17.679827   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:17.679843   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:17.757782   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:17.757814   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:17.803850   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:17.803878   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:17.857987   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:17.858017   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:17.871062   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:17.871086   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:17.940456   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
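Every describe-nodes attempt in these passes fails the same way ("The connection to the server localhost:8443 was refused"), which is consistent with crictl reporting zero kube-apiserver and etcd containers: the v1.20.0 control plane never came up on this node, so nothing is listening on the apiserver port. A quick hedged check from the host (again <profile> is a placeholder, and the availability of ss inside the guest image is an assumption):

    # while the loop above keeps failing, nothing should be listening on :8443
    minikube ssh -p <profile> "sudo ss -tlnp | grep 8443 || echo 'nothing listening on :8443'"
    # and CRI-O should return an empty ID list, matching the 'found id: ""' lines
    minikube ssh -p <profile> "sudo crictl ps -a --quiet --name=kube-apiserver"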
	I0729 18:30:20.441171   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:20.454752   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:20.454824   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:20.490744   78080 cri.go:89] found id: ""
	I0729 18:30:20.490773   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.490783   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:20.490791   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:20.490853   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:20.524406   78080 cri.go:89] found id: ""
	I0729 18:30:20.524437   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.524448   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:20.524463   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:20.524515   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:20.559225   78080 cri.go:89] found id: ""
	I0729 18:30:20.559257   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.559268   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:20.559275   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:20.559337   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:20.595297   78080 cri.go:89] found id: ""
	I0729 18:30:20.595324   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.595355   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:20.595364   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:20.595436   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:20.632176   78080 cri.go:89] found id: ""
	I0729 18:30:20.632204   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.632215   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:20.632222   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:20.632282   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:20.676600   78080 cri.go:89] found id: ""
	I0729 18:30:20.676625   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.676632   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:20.676638   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:20.676734   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:20.717920   78080 cri.go:89] found id: ""
	I0729 18:30:20.717945   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.717955   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:20.717966   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:20.718021   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:20.756217   78080 cri.go:89] found id: ""
	I0729 18:30:20.756243   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.756253   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:20.756262   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:20.756277   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:20.837150   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:20.837189   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:20.876023   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:20.876050   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:20.932402   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:20.932429   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:20.947422   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:20.947454   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:21.022698   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:23.523141   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:23.538019   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:23.538098   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:23.576953   78080 cri.go:89] found id: ""
	I0729 18:30:23.576979   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.576991   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:23.576998   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:23.577060   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:23.613052   78080 cri.go:89] found id: ""
	I0729 18:30:23.613083   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.613094   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:23.613100   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:23.613170   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:23.648694   78080 cri.go:89] found id: ""
	I0729 18:30:23.648717   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.648725   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:23.648730   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:23.648775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:23.680939   78080 cri.go:89] found id: ""
	I0729 18:30:23.680965   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.680972   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:23.680977   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:23.681032   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:23.716529   78080 cri.go:89] found id: ""
	I0729 18:30:23.716556   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.716564   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:23.716569   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:23.716628   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:23.756833   78080 cri.go:89] found id: ""
	I0729 18:30:23.756860   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.756868   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:23.756873   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:23.756918   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:23.796436   78080 cri.go:89] found id: ""
	I0729 18:30:23.796460   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.796467   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:23.796472   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:23.796519   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:23.839877   78080 cri.go:89] found id: ""
	I0729 18:30:23.839906   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.839914   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:23.839922   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:23.839934   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:23.879423   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:23.879447   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:23.928379   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:23.928408   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:23.942639   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:23.942669   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:24.014068   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:24.014095   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:24.014110   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:26.597923   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:26.610877   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:26.610945   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:26.647550   78080 cri.go:89] found id: ""
	I0729 18:30:26.647579   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.647590   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:26.647598   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:26.647655   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:26.681552   78080 cri.go:89] found id: ""
	I0729 18:30:26.681581   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.681589   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:26.681595   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:26.681660   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:26.714475   78080 cri.go:89] found id: ""
	I0729 18:30:26.714503   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.714513   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:26.714519   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:26.714588   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:26.748671   78080 cri.go:89] found id: ""
	I0729 18:30:26.748697   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.748707   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:26.748714   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:26.748775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:26.781380   78080 cri.go:89] found id: ""
	I0729 18:30:26.781406   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.781421   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:26.781429   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:26.781483   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:26.815201   78080 cri.go:89] found id: ""
	I0729 18:30:26.815230   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.815243   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:26.815251   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:26.815318   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:26.848600   78080 cri.go:89] found id: ""
	I0729 18:30:26.848628   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.848637   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:26.848644   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:26.848724   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:26.883828   78080 cri.go:89] found id: ""
	I0729 18:30:26.883872   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.883883   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:26.883893   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:26.883908   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:26.936955   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:26.936987   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:26.952212   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:26.952238   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:27.019389   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:27.019413   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:27.019426   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:27.095654   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:27.095682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
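Editor's note: the container probes logged above can be reproduced by hand on the minikube node. This is a minimal sketch, assuming SSH access to the node and that crictl is installed; an empty result corresponds to the `found id: ""` entries in the log.

	# list any kube-apiserver container (running or exited) known to the CRI runtime
	sudo crictl ps -a --quiet --name=kube-apiserver
	# no output means no such container exists, i.e. the control plane has not come up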
	I0729 18:30:29.637269   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:29.652138   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:29.652211   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:29.691063   78080 cri.go:89] found id: ""
	I0729 18:30:29.691094   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.691104   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:29.691111   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:29.691173   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:29.725188   78080 cri.go:89] found id: ""
	I0729 18:30:29.725224   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.725232   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:29.725240   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:29.725308   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:29.764118   78080 cri.go:89] found id: ""
	I0729 18:30:29.764149   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.764159   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:29.764167   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:29.764232   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:29.797884   78080 cri.go:89] found id: ""
	I0729 18:30:29.797909   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.797919   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:29.797927   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:29.797989   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:29.838784   78080 cri.go:89] found id: ""
	I0729 18:30:29.838808   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.838815   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:29.838821   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:29.838885   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:29.872394   78080 cri.go:89] found id: ""
	I0729 18:30:29.872420   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.872427   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:29.872433   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:29.872491   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:29.908966   78080 cri.go:89] found id: ""
	I0729 18:30:29.908995   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.909012   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:29.909020   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:29.909081   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:29.946322   78080 cri.go:89] found id: ""
	I0729 18:30:29.946344   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.946352   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:29.946371   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:29.946386   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:30.019133   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:30.019166   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:30.019179   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:30.096499   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:30.096532   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:30.136487   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:30.136519   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:30.187341   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:30.187374   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:32.703546   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:32.716981   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:32.717042   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:32.753275   78080 cri.go:89] found id: ""
	I0729 18:30:32.753307   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.753318   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:32.753326   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:32.753393   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:32.789075   78080 cri.go:89] found id: ""
	I0729 18:30:32.789105   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.789116   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:32.789123   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:32.789185   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:32.822945   78080 cri.go:89] found id: ""
	I0729 18:30:32.822971   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.822979   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:32.822984   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:32.823033   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:32.856523   78080 cri.go:89] found id: ""
	I0729 18:30:32.856577   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.856589   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:32.856597   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:32.856661   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:32.895768   78080 cri.go:89] found id: ""
	I0729 18:30:32.895798   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.895810   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:32.895817   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:32.895876   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:32.934990   78080 cri.go:89] found id: ""
	I0729 18:30:32.935030   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.935042   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:32.935054   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:32.935132   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:32.970924   78080 cri.go:89] found id: ""
	I0729 18:30:32.970949   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.970957   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:32.970964   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:32.971022   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:33.004133   78080 cri.go:89] found id: ""
	I0729 18:30:33.004164   78080 logs.go:276] 0 containers: []
	W0729 18:30:33.004173   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:33.004182   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:33.004202   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:33.043432   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:33.043467   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:33.095517   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:33.095554   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:33.108859   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:33.108889   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:33.180661   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:33.180681   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:33.180696   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:35.763324   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:35.777060   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:35.777138   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:35.812601   78080 cri.go:89] found id: ""
	I0729 18:30:35.812636   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.812647   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:35.812654   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:35.812719   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:35.848116   78080 cri.go:89] found id: ""
	I0729 18:30:35.848161   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.848172   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:35.848179   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:35.848240   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:35.895786   78080 cri.go:89] found id: ""
	I0729 18:30:35.895817   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.895829   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:35.895837   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:35.895911   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:35.936753   78080 cri.go:89] found id: ""
	I0729 18:30:35.936780   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.936787   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:35.936794   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:35.936848   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:35.971321   78080 cri.go:89] found id: ""
	I0729 18:30:35.971349   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.971358   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:35.971371   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:35.971434   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:36.018702   78080 cri.go:89] found id: ""
	I0729 18:30:36.018725   78080 logs.go:276] 0 containers: []
	W0729 18:30:36.018732   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:36.018737   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:36.018792   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:36.054829   78080 cri.go:89] found id: ""
	I0729 18:30:36.054865   78080 logs.go:276] 0 containers: []
	W0729 18:30:36.054875   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:36.054882   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:36.054948   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:36.087456   78080 cri.go:89] found id: ""
	I0729 18:30:36.087483   78080 logs.go:276] 0 containers: []
	W0729 18:30:36.087492   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:36.087500   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:36.087512   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:36.140919   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:36.140951   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:36.155581   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:36.155614   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:36.227617   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:36.227642   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:36.227669   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:36.304610   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:36.304651   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:38.843099   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:38.857571   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:38.857626   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:38.890760   78080 cri.go:89] found id: ""
	I0729 18:30:38.890790   78080 logs.go:276] 0 containers: []
	W0729 18:30:38.890801   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:38.890809   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:38.890884   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:38.932701   78080 cri.go:89] found id: ""
	I0729 18:30:38.932738   78080 logs.go:276] 0 containers: []
	W0729 18:30:38.932748   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:38.932755   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:38.932812   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:38.967379   78080 cri.go:89] found id: ""
	I0729 18:30:38.967406   78080 logs.go:276] 0 containers: []
	W0729 18:30:38.967416   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:38.967430   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:38.967490   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:39.000419   78080 cri.go:89] found id: ""
	I0729 18:30:39.000450   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.000459   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:39.000466   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:39.000528   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:39.033764   78080 cri.go:89] found id: ""
	I0729 18:30:39.033793   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.033802   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:39.033807   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:39.033857   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:39.070904   78080 cri.go:89] found id: ""
	I0729 18:30:39.070933   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.070944   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:39.070951   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:39.071010   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:39.107444   78080 cri.go:89] found id: ""
	I0729 18:30:39.107471   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.107480   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:39.107488   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:39.107549   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:39.141392   78080 cri.go:89] found id: ""
	I0729 18:30:39.141423   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.141436   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:39.141449   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:39.141464   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:39.154874   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:39.154905   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:39.229370   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:39.229396   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:39.229413   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:39.310508   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:39.310538   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:39.352547   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:39.352569   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:41.908463   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:41.922132   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:41.922209   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:41.960404   78080 cri.go:89] found id: ""
	I0729 18:30:41.960431   78080 logs.go:276] 0 containers: []
	W0729 18:30:41.960439   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:41.960444   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:41.960498   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:41.994082   78080 cri.go:89] found id: ""
	I0729 18:30:41.994110   78080 logs.go:276] 0 containers: []
	W0729 18:30:41.994117   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:41.994123   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:41.994177   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:42.030301   78080 cri.go:89] found id: ""
	I0729 18:30:42.030322   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.030330   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:42.030336   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:42.030401   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:42.064310   78080 cri.go:89] found id: ""
	I0729 18:30:42.064339   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.064349   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:42.064356   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:42.064413   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:42.097705   78080 cri.go:89] found id: ""
	I0729 18:30:42.097738   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.097748   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:42.097761   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:42.097819   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:42.133254   78080 cri.go:89] found id: ""
	I0729 18:30:42.133282   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.133292   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:42.133299   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:42.133361   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:42.170028   78080 cri.go:89] found id: ""
	I0729 18:30:42.170054   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.170063   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:42.170075   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:42.170141   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:42.205680   78080 cri.go:89] found id: ""
	I0729 18:30:42.205712   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.205723   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:42.205736   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:42.205749   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:42.246322   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:42.246350   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:42.300852   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:42.300884   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:42.316306   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:42.316333   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:42.389898   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:42.389920   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:42.389934   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:44.971238   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:44.984796   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:44.984846   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:45.021842   78080 cri.go:89] found id: ""
	I0729 18:30:45.021868   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.021877   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:45.021885   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:45.021958   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:45.059353   78080 cri.go:89] found id: ""
	I0729 18:30:45.059377   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.059387   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:45.059394   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:45.059456   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:45.094867   78080 cri.go:89] found id: ""
	I0729 18:30:45.094900   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.094911   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:45.094918   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:45.094974   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:45.128589   78080 cri.go:89] found id: ""
	I0729 18:30:45.128614   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.128622   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:45.128628   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:45.128671   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:45.160137   78080 cri.go:89] found id: ""
	I0729 18:30:45.160165   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.160172   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:45.160177   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:45.160228   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:45.205757   78080 cri.go:89] found id: ""
	I0729 18:30:45.205780   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.205787   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:45.205793   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:45.205840   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:45.250056   78080 cri.go:89] found id: ""
	I0729 18:30:45.250084   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.250091   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:45.250096   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:45.250179   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:45.285349   78080 cri.go:89] found id: ""
	I0729 18:30:45.285372   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.285380   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:45.285389   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:45.285401   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:45.364188   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:45.364218   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:45.412638   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:45.412660   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:45.467713   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:45.467745   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:45.483811   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:45.483835   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:45.564866   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
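Editor's note: the recurring "connection to the server localhost:8443 was refused" error above indicates that nothing is listening on the apiserver port while logs are being gathered. A minimal sketch for confirming that on the node, assuming curl and ss are available (these commands are not part of the test itself):

	# check whether anything is bound to the apiserver port
	sudo ss -ltn | grep ':8443' || echo "nothing listening on 8443"
	# probe the apiserver health endpoint directly (fails while the apiserver is down)
	curl -k https://localhost:8443/healthz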
	I0729 18:30:48.065579   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:48.079441   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:48.079511   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:48.115540   78080 cri.go:89] found id: ""
	I0729 18:30:48.115569   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.115578   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:48.115586   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:48.115670   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:48.151810   78080 cri.go:89] found id: ""
	I0729 18:30:48.151834   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.151841   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:48.151847   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:48.151913   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:48.187459   78080 cri.go:89] found id: ""
	I0729 18:30:48.187490   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.187500   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:48.187508   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:48.187568   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:48.226804   78080 cri.go:89] found id: ""
	I0729 18:30:48.226835   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.226846   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:48.226853   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:48.226916   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:48.260413   78080 cri.go:89] found id: ""
	I0729 18:30:48.260439   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.260448   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:48.260455   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:48.260517   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:48.296719   78080 cri.go:89] found id: ""
	I0729 18:30:48.296743   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.296751   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:48.296756   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:48.296806   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:48.331969   78080 cri.go:89] found id: ""
	I0729 18:30:48.331995   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.332002   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:48.332008   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:48.332055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:48.370593   78080 cri.go:89] found id: ""
	I0729 18:30:48.370618   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.370626   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:48.370634   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:48.370645   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:48.410653   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:48.410679   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:48.465467   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:48.465503   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:48.480025   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:48.480053   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:48.557806   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:48.557824   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:48.557840   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:51.140743   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:51.153970   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:51.154046   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:51.187826   78080 cri.go:89] found id: ""
	I0729 18:30:51.187851   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.187862   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:51.187868   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:51.187922   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:51.226140   78080 cri.go:89] found id: ""
	I0729 18:30:51.226172   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.226182   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:51.226189   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:51.226255   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:51.262321   78080 cri.go:89] found id: ""
	I0729 18:30:51.262349   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.262357   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:51.262378   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:51.262440   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:51.295356   78080 cri.go:89] found id: ""
	I0729 18:30:51.295383   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.295395   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:51.295403   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:51.295467   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:51.328320   78080 cri.go:89] found id: ""
	I0729 18:30:51.328349   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.328361   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:51.328367   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:51.328424   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:51.364202   78080 cri.go:89] found id: ""
	I0729 18:30:51.364233   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.364242   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:51.364249   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:51.364313   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:51.405500   78080 cri.go:89] found id: ""
	I0729 18:30:51.405529   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.405538   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:51.405544   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:51.405606   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:51.443519   78080 cri.go:89] found id: ""
	I0729 18:30:51.443541   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.443548   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:51.443556   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:51.443567   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:51.495560   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:51.495599   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:51.512152   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:51.512178   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:51.590972   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:51.590992   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:51.591021   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:51.688717   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:51.688757   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:54.256011   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:54.270602   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:54.270653   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:54.311547   78080 cri.go:89] found id: ""
	I0729 18:30:54.311574   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.311584   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:54.311592   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:54.311655   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:54.347559   78080 cri.go:89] found id: ""
	I0729 18:30:54.347591   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.347602   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:54.347610   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:54.347675   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:54.382180   78080 cri.go:89] found id: ""
	I0729 18:30:54.382205   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.382212   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:54.382217   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:54.382264   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:54.415560   78080 cri.go:89] found id: ""
	I0729 18:30:54.415587   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.415594   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:54.415600   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:54.415655   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:54.450313   78080 cri.go:89] found id: ""
	I0729 18:30:54.450341   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.450351   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:54.450372   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:54.450439   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:54.484649   78080 cri.go:89] found id: ""
	I0729 18:30:54.484678   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.484687   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:54.484694   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:54.484741   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:54.520170   78080 cri.go:89] found id: ""
	I0729 18:30:54.520204   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.520212   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:54.520220   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:54.520270   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:54.562724   78080 cri.go:89] found id: ""
	I0729 18:30:54.562753   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.562762   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:54.562772   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:54.562788   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:54.617461   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:54.617498   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:54.630970   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:54.630993   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:54.699332   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:54.699353   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:54.699366   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:54.779240   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:54.779276   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:57.318673   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:57.332789   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:57.332845   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:57.370434   78080 cri.go:89] found id: ""
	I0729 18:30:57.370461   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.370486   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:57.370492   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:57.370547   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:57.420694   78080 cri.go:89] found id: ""
	I0729 18:30:57.420724   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.420735   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:57.420742   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:57.420808   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:57.469245   78080 cri.go:89] found id: ""
	I0729 18:30:57.469271   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.469282   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:57.469288   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:57.469355   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:57.524937   78080 cri.go:89] found id: ""
	I0729 18:30:57.524963   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.524970   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:57.524976   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:57.525031   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:57.566803   78080 cri.go:89] found id: ""
	I0729 18:30:57.566830   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.566840   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:57.566847   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:57.566910   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:57.602786   78080 cri.go:89] found id: ""
	I0729 18:30:57.602814   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.602821   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:57.602826   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:57.602891   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:57.639319   78080 cri.go:89] found id: ""
	I0729 18:30:57.639347   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.639355   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:57.639361   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:57.639408   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:57.672580   78080 cri.go:89] found id: ""
	I0729 18:30:57.672610   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.672621   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:57.672632   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:57.672647   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:57.751550   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:57.751572   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:57.751586   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:57.840057   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:57.840097   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:57.884698   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:57.884737   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:57.944468   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:57.944497   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
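Editor's note: as the timestamps show, the pgrep probe for kube-apiserver repeats roughly every three seconds until it succeeds or the overall start timeout expires. A rough shell illustration of that wait loop follows; it approximates the behaviour visible in the log and is not minikube's actual implementation.

	# poll for a running kube-apiserver process, as the log does, until one appears
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 3
	done
	echo "kube-apiserver process detected"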
	I0729 18:31:00.459605   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:00.473079   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:00.473138   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:00.508492   78080 cri.go:89] found id: ""
	I0729 18:31:00.508525   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.508536   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:00.508543   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:00.508604   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:00.544844   78080 cri.go:89] found id: ""
	I0729 18:31:00.544875   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.544886   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:00.544899   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:00.544960   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:00.578402   78080 cri.go:89] found id: ""
	I0729 18:31:00.578432   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.578443   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:00.578450   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:00.578508   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:00.611886   78080 cri.go:89] found id: ""
	I0729 18:31:00.611913   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.611922   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:00.611928   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:00.611989   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:00.649126   78080 cri.go:89] found id: ""
	I0729 18:31:00.649153   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.649162   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:00.649168   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:00.649229   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:00.686534   78080 cri.go:89] found id: ""
	I0729 18:31:00.686561   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.686571   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:00.686578   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:00.686639   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:00.718656   78080 cri.go:89] found id: ""
	I0729 18:31:00.718680   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.718690   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:00.718696   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:00.718755   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:00.752740   78080 cri.go:89] found id: ""
	I0729 18:31:00.752766   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.752776   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:00.752786   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:00.752800   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:00.804293   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:00.804323   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:00.817988   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:00.818010   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:00.892178   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:00.892210   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:00.892231   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:00.973164   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:00.973199   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
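The block above repeats roughly every three seconds while minikube waits for the control plane to come back: it probes for a kube-apiserver process, lists each expected control-plane container with crictl, and re-gathers kubelet, dmesg, CRI-O and container-status logs. A minimal shell sketch of the same probe, using only commands that appear in this log (run on the node, e.g. via minikube ssh):

    # check for a running apiserver process (what pgrep -xnf does above)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # list kube-apiserver containers, including exited ones; empty output means none were created
    sudo crictl ps -a --quiet --name=kube-apiserver
    # tail the kubelet and CRI-O units for clues about why nothing is starting
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400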
	I0729 18:31:03.512105   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:03.526536   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:03.526602   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:03.561579   78080 cri.go:89] found id: ""
	I0729 18:31:03.561604   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.561614   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:03.561621   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:03.561681   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:03.603995   78080 cri.go:89] found id: ""
	I0729 18:31:03.604019   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.604028   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:03.604033   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:03.604079   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:03.640879   78080 cri.go:89] found id: ""
	I0729 18:31:03.640902   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.640910   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:03.640917   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:03.640971   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:03.675262   78080 cri.go:89] found id: ""
	I0729 18:31:03.675288   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.675296   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:03.675302   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:03.675349   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:03.708094   78080 cri.go:89] found id: ""
	I0729 18:31:03.708128   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.708137   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:03.708142   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:03.708190   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:03.748262   78080 cri.go:89] found id: ""
	I0729 18:31:03.748287   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.748298   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:03.748304   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:03.748360   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:03.789758   78080 cri.go:89] found id: ""
	I0729 18:31:03.789788   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.789800   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:03.789806   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:03.789893   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:03.829253   78080 cri.go:89] found id: ""
	I0729 18:31:03.829280   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.829291   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:03.829299   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:03.829317   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:03.883012   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:03.883044   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:03.899264   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:03.899294   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:03.970241   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:03.970261   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:03.970274   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:04.056205   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:04.056244   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:06.604919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:06.619163   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:06.619242   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:06.656939   78080 cri.go:89] found id: ""
	I0729 18:31:06.656970   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.656982   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:06.656989   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:06.657075   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:06.692577   78080 cri.go:89] found id: ""
	I0729 18:31:06.692608   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.692624   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:06.692632   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:06.692695   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:06.730045   78080 cri.go:89] found id: ""
	I0729 18:31:06.730077   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.730088   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:06.730096   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:06.730179   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:06.771794   78080 cri.go:89] found id: ""
	I0729 18:31:06.771820   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.771830   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:06.771838   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:06.771905   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:06.806149   78080 cri.go:89] found id: ""
	I0729 18:31:06.806177   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.806187   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:06.806194   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:06.806252   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:06.851875   78080 cri.go:89] found id: ""
	I0729 18:31:06.851905   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.851923   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:06.851931   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:06.851996   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:06.890335   78080 cri.go:89] found id: ""
	I0729 18:31:06.890382   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.890393   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:06.890399   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:06.890460   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:06.928873   78080 cri.go:89] found id: ""
	I0729 18:31:06.928902   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.928912   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:06.928922   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:06.928935   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:06.944269   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:06.944295   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:07.011658   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:07.011682   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:07.011697   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:07.109899   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:07.109948   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:07.154569   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:07.154600   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:09.709101   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:09.722387   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:09.722461   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:09.760443   78080 cri.go:89] found id: ""
	I0729 18:31:09.760471   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.760481   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:09.760488   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:09.760551   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:09.796177   78080 cri.go:89] found id: ""
	I0729 18:31:09.796200   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.796209   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:09.796214   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:09.796264   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:09.831955   78080 cri.go:89] found id: ""
	I0729 18:31:09.831983   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.831990   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:09.831995   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:09.832055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:09.863913   78080 cri.go:89] found id: ""
	I0729 18:31:09.863939   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.863949   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:09.863956   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:09.864014   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:09.897553   78080 cri.go:89] found id: ""
	I0729 18:31:09.897575   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.897583   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:09.897588   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:09.897645   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:09.935203   78080 cri.go:89] found id: ""
	I0729 18:31:09.935221   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.935228   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:09.935238   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:09.935296   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:09.971098   78080 cri.go:89] found id: ""
	I0729 18:31:09.971125   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.971135   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:09.971142   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:09.971224   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:10.006760   78080 cri.go:89] found id: ""
	I0729 18:31:10.006794   78080 logs.go:276] 0 containers: []
	W0729 18:31:10.006804   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:10.006815   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:10.006830   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:10.056037   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:10.056066   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:10.070633   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:10.070660   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:10.139953   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:10.139983   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:10.140002   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:10.220748   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:10.220781   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:12.766391   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:12.779837   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:12.779889   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:12.813910   78080 cri.go:89] found id: ""
	I0729 18:31:12.813941   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.813951   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:12.813959   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:12.814008   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:12.848811   78080 cri.go:89] found id: ""
	I0729 18:31:12.848854   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.848865   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:12.848872   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:12.848927   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:12.884740   78080 cri.go:89] found id: ""
	I0729 18:31:12.884769   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.884780   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:12.884786   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:12.884833   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:12.923826   78080 cri.go:89] found id: ""
	I0729 18:31:12.923859   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.923870   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:12.923878   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:12.923930   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:12.959127   78080 cri.go:89] found id: ""
	I0729 18:31:12.959157   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.959168   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:12.959175   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:12.959245   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:12.994384   78080 cri.go:89] found id: ""
	I0729 18:31:12.994417   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.994430   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:12.994439   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:12.994506   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:13.027854   78080 cri.go:89] found id: ""
	I0729 18:31:13.027883   78080 logs.go:276] 0 containers: []
	W0729 18:31:13.027892   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:13.027897   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:13.027951   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:13.062270   78080 cri.go:89] found id: ""
	I0729 18:31:13.062300   78080 logs.go:276] 0 containers: []
	W0729 18:31:13.062310   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:13.062321   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:13.062334   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:13.114473   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:13.114500   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:13.127820   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:13.127845   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:13.195830   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:13.195848   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:13.195862   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:13.281711   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:13.281748   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:15.824456   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:15.837532   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:15.837587   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:15.871706   78080 cri.go:89] found id: ""
	I0729 18:31:15.871739   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.871750   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:15.871757   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:15.871817   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:15.906882   78080 cri.go:89] found id: ""
	I0729 18:31:15.906905   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.906912   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:15.906917   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:15.906976   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:15.943015   78080 cri.go:89] found id: ""
	I0729 18:31:15.943043   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.943057   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:15.943065   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:15.943126   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:15.980501   78080 cri.go:89] found id: ""
	I0729 18:31:15.980528   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.980536   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:15.980542   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:15.980588   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:16.014148   78080 cri.go:89] found id: ""
	I0729 18:31:16.014176   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.014183   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:16.014189   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:16.014236   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:16.048296   78080 cri.go:89] found id: ""
	I0729 18:31:16.048319   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.048326   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:16.048334   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:16.048392   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:16.084328   78080 cri.go:89] found id: ""
	I0729 18:31:16.084350   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.084358   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:16.084363   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:16.084411   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:16.120048   78080 cri.go:89] found id: ""
	I0729 18:31:16.120076   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.120084   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:16.120092   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:16.120105   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:16.173476   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:16.173503   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:16.190200   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:16.190232   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:16.261993   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:16.262014   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:16.262026   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:16.340298   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:16.340331   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:18.883152   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:18.897292   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:18.897360   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:18.931276   78080 cri.go:89] found id: ""
	I0729 18:31:18.931303   78080 logs.go:276] 0 containers: []
	W0729 18:31:18.931313   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:18.931321   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:18.931379   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:18.975803   78080 cri.go:89] found id: ""
	I0729 18:31:18.975832   78080 logs.go:276] 0 containers: []
	W0729 18:31:18.975843   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:18.975853   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:18.975912   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:19.012920   78080 cri.go:89] found id: ""
	I0729 18:31:19.012951   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.012963   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:19.012970   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:19.013031   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:19.047640   78080 cri.go:89] found id: ""
	I0729 18:31:19.047667   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.047679   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:19.047687   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:19.047749   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:19.082495   78080 cri.go:89] found id: ""
	I0729 18:31:19.082522   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.082533   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:19.082540   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:19.082591   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:19.117988   78080 cri.go:89] found id: ""
	I0729 18:31:19.118016   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.118027   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:19.118034   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:19.118096   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:19.153725   78080 cri.go:89] found id: ""
	I0729 18:31:19.153753   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.153764   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:19.153771   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:19.153836   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:19.192827   78080 cri.go:89] found id: ""
	I0729 18:31:19.192857   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.192868   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:19.192879   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:19.192894   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:19.208802   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:19.208833   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:19.285877   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:19.285897   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:19.285909   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:19.366563   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:19.366598   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:19.404563   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:19.404590   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:21.958449   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:21.971674   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:21.971739   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:22.006231   78080 cri.go:89] found id: ""
	I0729 18:31:22.006253   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.006261   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:22.006266   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:22.006314   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:22.042575   78080 cri.go:89] found id: ""
	I0729 18:31:22.042599   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.042609   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:22.042616   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:22.042679   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:22.079446   78080 cri.go:89] found id: ""
	I0729 18:31:22.079471   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.079482   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:22.079489   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:22.079554   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:22.115940   78080 cri.go:89] found id: ""
	I0729 18:31:22.115967   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.115976   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:22.115984   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:22.116055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:22.149420   78080 cri.go:89] found id: ""
	I0729 18:31:22.149447   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.149456   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:22.149461   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:22.149511   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:22.182992   78080 cri.go:89] found id: ""
	I0729 18:31:22.183019   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.183027   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:22.183032   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:22.183090   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:22.218441   78080 cri.go:89] found id: ""
	I0729 18:31:22.218474   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.218487   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:22.218497   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:22.218564   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:22.263135   78080 cri.go:89] found id: ""
	I0729 18:31:22.263164   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.263173   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:22.263183   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:22.263198   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:22.319010   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:22.319049   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:22.333151   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:22.333179   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:22.404661   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:22.404683   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:22.404706   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:22.488497   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:22.488537   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:25.032215   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:25.045114   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:25.045191   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:25.082244   78080 cri.go:89] found id: ""
	I0729 18:31:25.082278   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.082289   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:25.082299   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:25.082388   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:25.118295   78080 cri.go:89] found id: ""
	I0729 18:31:25.118318   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.118325   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:25.118331   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:25.118395   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:25.157948   78080 cri.go:89] found id: ""
	I0729 18:31:25.157974   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.157984   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:25.157992   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:25.158054   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:25.194708   78080 cri.go:89] found id: ""
	I0729 18:31:25.194734   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.194743   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:25.194751   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:25.194813   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:25.235923   78080 cri.go:89] found id: ""
	I0729 18:31:25.235952   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.235962   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:25.235969   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:25.236032   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:25.271316   78080 cri.go:89] found id: ""
	I0729 18:31:25.271342   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.271353   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:25.271360   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:25.271422   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:25.309399   78080 cri.go:89] found id: ""
	I0729 18:31:25.309427   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.309438   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:25.309446   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:25.309503   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:25.347979   78080 cri.go:89] found id: ""
	I0729 18:31:25.348009   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.348021   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:25.348031   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:25.348046   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:25.400785   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:25.400812   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:25.413891   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:25.413915   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:25.487721   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:25.487752   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:25.487767   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:25.575500   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:25.575531   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:28.121761   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:28.136756   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:28.136813   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:28.175461   78080 cri.go:89] found id: ""
	I0729 18:31:28.175491   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.175502   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:28.175509   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:28.175567   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:28.215024   78080 cri.go:89] found id: ""
	I0729 18:31:28.215046   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.215055   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:28.215060   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:28.215122   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:28.253999   78080 cri.go:89] found id: ""
	I0729 18:31:28.254023   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.254031   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:28.254037   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:28.254090   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:28.287902   78080 cri.go:89] found id: ""
	I0729 18:31:28.287929   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.287940   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:28.287948   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:28.288006   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:28.322390   78080 cri.go:89] found id: ""
	I0729 18:31:28.322422   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.322433   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:28.322441   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:28.322500   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:28.356951   78080 cri.go:89] found id: ""
	I0729 18:31:28.356980   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.356991   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:28.356999   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:28.357060   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:28.393439   78080 cri.go:89] found id: ""
	I0729 18:31:28.393461   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.393471   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:28.393477   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:28.393535   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:28.431827   78080 cri.go:89] found id: ""
	I0729 18:31:28.431858   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.431868   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:28.431878   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:28.431892   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:28.509279   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:28.509315   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:28.564036   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:28.564064   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:28.626970   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:28.627000   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:28.641417   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:28.641446   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:28.713406   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:31.213942   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:31.228942   78080 kubeadm.go:597] duration metric: took 4m3.040952507s to restartPrimaryControlPlane
	W0729 18:31:31.229020   78080 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 18:31:31.229042   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:31:31.696335   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:31:31.711230   78080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:31:31.720924   78080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:31:31.730348   78080 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:31:31.730378   78080 kubeadm.go:157] found existing configuration files:
	
	I0729 18:31:31.730418   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:31:31.739761   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:31:31.739810   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:31:31.749021   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:31:31.758107   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:31:31.758155   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:31:31.768326   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:31:31.777347   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:31:31.777388   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:31:31.786752   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:31:31.795728   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:31:31.795776   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
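Before re-running kubeadm init, minikube checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it, so kubeadm can regenerate them. A hedged sketch of that cleanup as a single loop, built from the same grep/rm pair seen above (the endpoint https://control-plane.minikube.internal:8443 is taken from this run; the loop itself is illustrative, not minikube's code):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it points at the expected control-plane endpoint
      if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"   # stale or missing config; kubeadm will rewrite it
      fi
    done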
	I0729 18:31:31.805369   78080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:31:31.883678   78080 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 18:31:31.883751   78080 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:31:32.040989   78080 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:31:32.041127   78080 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:31:32.041259   78080 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:31:32.261525   78080 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:31:32.263137   78080 out.go:204]   - Generating certificates and keys ...
	I0729 18:31:32.263242   78080 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:31:32.263349   78080 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:31:32.263461   78080 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:31:32.263554   78080 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:31:32.263640   78080 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:31:32.263724   78080 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:31:32.263801   78080 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:31:32.263872   78080 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:31:32.263993   78080 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:31:32.264109   78080 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:31:32.264164   78080 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:31:32.264255   78080 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:31:32.435248   78080 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:31:32.509478   78080 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:31:32.737003   78080 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:31:33.079523   78080 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:31:33.099871   78080 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:31:33.101450   78080 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:31:33.101520   78080 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:31:33.242577   78080 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:31:33.244436   78080 out.go:204]   - Booting up control plane ...
	I0729 18:31:33.244570   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:31:33.245677   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:31:33.249530   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:31:33.250262   78080 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:31:33.261418   78080 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 18:32:13.262907   78080 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 18:32:13.263487   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:13.263679   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:32:18.264261   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:18.264554   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:32:28.265190   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:28.265433   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:32:48.266531   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:48.266736   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:33:28.269122   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:33:28.269375   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:33:28.269399   78080 kubeadm.go:310] 
	I0729 18:33:28.269433   78080 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 18:33:28.269471   78080 kubeadm.go:310] 		timed out waiting for the condition
	I0729 18:33:28.269480   78080 kubeadm.go:310] 
	I0729 18:33:28.269508   78080 kubeadm.go:310] 	This error is likely caused by:
	I0729 18:33:28.269541   78080 kubeadm.go:310] 		- The kubelet is not running
	I0729 18:33:28.269686   78080 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 18:33:28.269698   78080 kubeadm.go:310] 
	I0729 18:33:28.269846   78080 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 18:33:28.269902   78080 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 18:33:28.269946   78080 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 18:33:28.269969   78080 kubeadm.go:310] 
	I0729 18:33:28.270132   78080 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 18:33:28.270246   78080 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 18:33:28.270258   78080 kubeadm.go:310] 
	I0729 18:33:28.270434   78080 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 18:33:28.270567   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 18:33:28.270674   78080 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 18:33:28.270774   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 18:33:28.270784   78080 kubeadm.go:310] 
	I0729 18:33:28.271347   78080 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:33:28.271428   78080 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 18:33:28.271503   78080 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
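The block above is kubeadm giving up at the wait-control-plane phase: the kubelet never answers its health check on localhost:10248, so no static pods come up. The troubleshooting commands it suggests can be run together on the node; they are collected here as a convenience sketch (sudo added, since the original message assumes a root shell):

    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID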
	W0729 18:33:28.271650   78080 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 18:33:28.271713   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:33:28.743675   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:33:28.759228   78080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:33:28.768522   78080 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:33:28.768546   78080 kubeadm.go:157] found existing configuration files:
	
	I0729 18:33:28.768593   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:33:28.777423   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:33:28.777481   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:33:28.786450   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:33:28.795335   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:33:28.795386   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:33:28.804519   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:33:28.813137   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:33:28.813193   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:33:28.822053   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:33:28.830463   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:33:28.830513   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
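Before retrying, minikube resets the node and clears any stale kubeconfig files, as the lines above show: a forced kubeadm reset against the cri-o socket, then a grep for the expected control-plane endpoint in each /etc/kubernetes/*.conf followed by rm -f when it is missing. A rough shell equivalent, assuming a root shell on the node (the loop is written for illustration and is not lifted from minikube's source):

    env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f || rm -f /etc/kubernetes/$f
    done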
	I0729 18:33:28.839818   78080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:33:29.066010   78080 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:35:25.197434   78080 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 18:35:25.197566   78080 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 18:35:25.199476   78080 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 18:35:25.199554   78080 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:35:25.199667   78080 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:35:25.199800   78080 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:35:25.199937   78080 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:35:25.200054   78080 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:35:25.201801   78080 out.go:204]   - Generating certificates and keys ...
	I0729 18:35:25.201875   78080 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:35:25.201944   78080 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:35:25.202073   78080 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:35:25.202136   78080 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:35:25.202231   78080 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:35:25.202287   78080 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:35:25.202339   78080 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:35:25.202426   78080 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:35:25.202492   78080 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:35:25.202560   78080 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:35:25.202603   78080 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:35:25.202692   78080 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:35:25.202779   78080 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:35:25.202863   78080 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:35:25.202962   78080 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:35:25.203070   78080 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:35:25.203213   78080 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:35:25.203289   78080 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:35:25.203323   78080 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:35:25.203381   78080 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:35:25.204837   78080 out.go:204]   - Booting up control plane ...
	I0729 18:35:25.204920   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:35:25.204985   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:35:25.205053   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:35:25.205146   78080 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:35:25.205274   78080 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 18:35:25.205316   78080 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 18:35:25.205379   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.205591   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.205658   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.205828   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.205926   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.206142   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.206204   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.206411   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.206488   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.206683   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.206698   78080 kubeadm.go:310] 
	I0729 18:35:25.206755   78080 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 18:35:25.206817   78080 kubeadm.go:310] 		timed out waiting for the condition
	I0729 18:35:25.206827   78080 kubeadm.go:310] 
	I0729 18:35:25.206860   78080 kubeadm.go:310] 	This error is likely caused by:
	I0729 18:35:25.206890   78080 kubeadm.go:310] 		- The kubelet is not running
	I0729 18:35:25.206975   78080 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 18:35:25.206985   78080 kubeadm.go:310] 
	I0729 18:35:25.207099   78080 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 18:35:25.207134   78080 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 18:35:25.207167   78080 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 18:35:25.207177   78080 kubeadm.go:310] 
	I0729 18:35:25.207289   78080 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 18:35:25.207403   78080 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 18:35:25.207412   78080 kubeadm.go:310] 
	I0729 18:35:25.207532   78080 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 18:35:25.207640   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 18:35:25.207754   78080 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 18:35:25.207821   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 18:35:25.207854   78080 kubeadm.go:310] 
	I0729 18:35:25.207886   78080 kubeadm.go:394] duration metric: took 7m57.080498205s to StartCluster
	I0729 18:35:25.207923   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:35:25.207983   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:35:25.251803   78080 cri.go:89] found id: ""
	I0729 18:35:25.251841   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.251852   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:35:25.251859   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:35:25.251920   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:35:25.287842   78080 cri.go:89] found id: ""
	I0729 18:35:25.287877   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.287895   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:35:25.287903   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:35:25.287967   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:35:25.324546   78080 cri.go:89] found id: ""
	I0729 18:35:25.324573   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.324582   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:35:25.324588   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:35:25.324634   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:35:25.375723   78080 cri.go:89] found id: ""
	I0729 18:35:25.375746   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.375753   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:35:25.375759   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:35:25.375812   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:35:25.412580   78080 cri.go:89] found id: ""
	I0729 18:35:25.412604   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.412612   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:35:25.412617   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:35:25.412664   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:35:25.449360   78080 cri.go:89] found id: ""
	I0729 18:35:25.449397   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.449406   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:35:25.449413   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:35:25.449464   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:35:25.485655   78080 cri.go:89] found id: ""
	I0729 18:35:25.485687   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.485698   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:35:25.485705   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:35:25.485769   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:35:25.521752   78080 cri.go:89] found id: ""
	I0729 18:35:25.521776   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.521783   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:35:25.521792   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:35:25.521808   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:35:25.562894   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:35:25.562922   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:35:25.623879   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:35:25.623912   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:35:25.647315   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:35:25.647341   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:35:25.744827   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:35:25.744850   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:35:25.744865   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
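With no control-plane containers found by crictl, minikube falls back to gathering diagnostics: container status, the kubelet and cri-o journals, dmesg, and a kubectl describe nodes that fails for the same reason the cluster never came up (nothing is listening on localhost:8443). The same data can be collected by hand on the node with the commands logged above, shown here slightly simplified:

    sudo crictl ps -a
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400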
	W0729 18:35:25.849394   78080 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 18:35:25.849445   78080 out.go:239] * 
	W0729 18:35:25.849520   78080 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 18:35:25.849558   78080 out.go:239] * 
	W0729 18:35:25.850438   78080 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
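Both init attempts time out the same way, so minikube classifies the failure as K8S_KUBELET_NOT_RUNNING and exits. Per the advice box above, a full log bundle suitable for attaching to a bug report can be produced from the host with:

    minikube logs --file=logs.txt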
	I0729 18:35:25.853770   78080 out.go:177] 
	W0729 18:35:25.854982   78080 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 18:35:25.855035   78080 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 18:35:25.855060   78080 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 18:35:25.856444   78080 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-386663 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
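Note (not part of the recorded test run): the troubleshooting steps suggested in the kubeadm output above can be run against the node with minikube ssh. This is only a sketch assembled from the commands quoted in the log and the failing start command; the profile name and flags are taken from that command, and the exact invocation may differ in practice:

	# inspect the kubelet on the old-k8s-version node (commands from the kubeadm hints above)
	out/minikube-linux-amd64 -p old-k8s-version-386663 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-386663 ssh "sudo journalctl -xeu kubelet | tail -n 50"
	out/minikube-linux-amd64 -p old-k8s-version-386663 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry with the cgroup-driver hint from the minikube suggestion above, keeping the original start flags
	out/minikube-linux-amd64 start -p old-k8s-version-386663 --memory=2200 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd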
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386663 -n old-k8s-version-386663
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386663 -n old-k8s-version-386663: exit status 2 (225.854398ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-386663 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-386663 logs -n 25: (1.511918843s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-729010 sudo cat                              | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo                                  | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo                                  | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo                                  | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo find                             | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo crio                             | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-729010                                       | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	| delete  | -p                                                     | disable-driver-mounts-603863 | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | disable-driver-mounts-603863                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:19 UTC |
	|         | default-k8s-diff-port-502055                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-888056             | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC | 29 Jul 24 18:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-888056                                   | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-409322            | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC | 29 Jul 24 18:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-409322                                  | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-502055  | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC |                     |
	|         | default-k8s-diff-port-502055                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-386663        | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-888056                  | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-888056 --memory=2200                     | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:33 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-409322                 | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-409322                                  | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-502055       | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:31 UTC |
	|         | default-k8s-diff-port-502055                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-386663                              | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:22 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-386663             | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-386663                              | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 18:22:47
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 18:22:47.218965   78080 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:22:47.219209   78080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:22:47.219217   78080 out.go:304] Setting ErrFile to fd 2...
	I0729 18:22:47.219222   78080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:22:47.219370   78080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 18:22:47.219863   78080 out.go:298] Setting JSON to false
	I0729 18:22:47.220726   78080 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7519,"bootTime":1722269848,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:22:47.220777   78080 start.go:139] virtualization: kvm guest
	I0729 18:22:47.222804   78080 out.go:177] * [old-k8s-version-386663] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:22:47.224119   78080 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 18:22:47.224173   78080 notify.go:220] Checking for updates...
	I0729 18:22:47.226449   78080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:22:47.227676   78080 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:22:47.228809   78080 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 18:22:47.229914   78080 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:22:47.230906   78080 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:22:47.232363   78080 config.go:182] Loaded profile config "old-k8s-version-386663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 18:22:47.232750   78080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:22:47.232814   78080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:22:47.247542   78080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44723
	I0729 18:22:47.247909   78080 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:22:47.248418   78080 main.go:141] libmachine: Using API Version  1
	I0729 18:22:47.248436   78080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:22:47.248786   78080 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:22:47.248965   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:22:47.250635   78080 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 18:22:47.251760   78080 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:22:47.252055   78080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:22:47.252098   78080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:22:47.266291   78080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35843
	I0729 18:22:47.266672   78080 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:22:47.267136   78080 main.go:141] libmachine: Using API Version  1
	I0729 18:22:47.267157   78080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:22:47.267492   78080 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:22:47.267662   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:22:47.303335   78080 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 18:22:47.304503   78080 start.go:297] selected driver: kvm2
	I0729 18:22:47.304513   78080 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:22:47.304607   78080 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:22:47.305291   78080 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:22:47.305360   78080 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19345-11206/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:22:47.319918   78080 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:22:47.320315   78080 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:22:47.320341   78080 cni.go:84] Creating CNI manager for ""
	I0729 18:22:47.320349   78080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:22:47.320386   78080 start.go:340] cluster config:
	{Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:22:47.320480   78080 iso.go:125] acquiring lock: {Name:mke302f851ce8256f9b44dd080ed38df68285cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:22:47.322357   78080 out.go:177] * Starting "old-k8s-version-386663" primary control-plane node in "old-k8s-version-386663" cluster
	I0729 18:22:43.378634   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:22:46.450644   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:22:47.323622   78080 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:22:47.323653   78080 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 18:22:47.323660   78080 cache.go:56] Caching tarball of preloaded images
	I0729 18:22:47.323740   78080 preload.go:172] Found /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:22:47.323761   78080 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 18:22:47.323849   78080 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/config.json ...
	I0729 18:22:47.324021   78080 start.go:360] acquireMachinesLock for old-k8s-version-386663: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:22:52.530551   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:22:55.602731   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:01.682636   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:04.754621   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:10.834616   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:13.906688   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:19.986655   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:23.059064   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:29.138659   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:32.210758   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:38.290665   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:41.362732   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:47.442637   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:50.514656   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:56.594611   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:59.666706   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:05.746649   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:08.818685   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:14.898642   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:17.970619   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:24.050664   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:27.122664   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:33.202629   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:36.274678   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:42.354674   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:45.426704   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:51.506670   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:54.578602   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:00.658683   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:03.730663   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:09.810619   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:12.882598   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:18.962612   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:22.034673   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:28.114638   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:31.186598   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:37.266642   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:40.338599   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:46.418679   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:49.490705   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:55.570690   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:58.642719   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:04.722643   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:07.794711   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:13.874638   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:16.946806   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:19.951345   77627 start.go:364] duration metric: took 4m10.060086709s to acquireMachinesLock for "embed-certs-409322"
	I0729 18:26:19.951406   77627 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:26:19.951414   77627 fix.go:54] fixHost starting: 
	I0729 18:26:19.951732   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:26:19.951761   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:26:19.967602   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41827
	I0729 18:26:19.968062   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:26:19.968486   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:26:19.968505   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:26:19.968809   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:26:19.969009   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:19.969135   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:26:19.970757   77627 fix.go:112] recreateIfNeeded on embed-certs-409322: state=Stopped err=<nil>
	I0729 18:26:19.970784   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	W0729 18:26:19.970931   77627 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:26:19.972631   77627 out.go:177] * Restarting existing kvm2 VM for "embed-certs-409322" ...
	I0729 18:26:19.948656   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:26:19.948718   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:26:19.949066   77394 buildroot.go:166] provisioning hostname "no-preload-888056"
	I0729 18:26:19.949096   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:26:19.949286   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:26:19.951194   77394 machine.go:97] duration metric: took 4m37.435248922s to provisionDockerMachine
	I0729 18:26:19.951238   77394 fix.go:56] duration metric: took 4m37.45552986s for fixHost
	I0729 18:26:19.951246   77394 start.go:83] releasing machines lock for "no-preload-888056", held for 4m37.455571504s
	W0729 18:26:19.951284   77394 start.go:714] error starting host: provision: host is not running
	W0729 18:26:19.951381   77394 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 18:26:19.951389   77394 start.go:729] Will try again in 5 seconds ...
	I0729 18:26:19.973786   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Start
	I0729 18:26:19.973923   77627 main.go:141] libmachine: (embed-certs-409322) Ensuring networks are active...
	I0729 18:26:19.974594   77627 main.go:141] libmachine: (embed-certs-409322) Ensuring network default is active
	I0729 18:26:19.974930   77627 main.go:141] libmachine: (embed-certs-409322) Ensuring network mk-embed-certs-409322 is active
	I0729 18:26:19.975500   77627 main.go:141] libmachine: (embed-certs-409322) Getting domain xml...
	I0729 18:26:19.976135   77627 main.go:141] libmachine: (embed-certs-409322) Creating domain...
	I0729 18:26:21.186491   77627 main.go:141] libmachine: (embed-certs-409322) Waiting to get IP...
	I0729 18:26:21.187403   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:21.187857   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:21.187924   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:21.187843   78811 retry.go:31] will retry after 218.694883ms: waiting for machine to come up
	I0729 18:26:21.408404   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:21.408843   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:21.408872   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:21.408795   78811 retry.go:31] will retry after 335.138992ms: waiting for machine to come up
	I0729 18:26:21.745329   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:21.745805   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:21.745828   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:21.745759   78811 retry.go:31] will retry after 317.831297ms: waiting for machine to come up
	I0729 18:26:22.065446   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:22.065985   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:22.066024   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:22.065948   78811 retry.go:31] will retry after 557.945634ms: waiting for machine to come up
	I0729 18:26:22.625624   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:22.626020   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:22.626047   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:22.625967   78811 retry.go:31] will retry after 739.991425ms: waiting for machine to come up
	I0729 18:26:23.368166   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:23.368523   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:23.368549   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:23.368477   78811 retry.go:31] will retry after 878.16479ms: waiting for machine to come up
	I0729 18:26:24.248467   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:24.248871   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:24.248895   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:24.248813   78811 retry.go:31] will retry after 1.022542608s: waiting for machine to come up
	I0729 18:26:24.952911   77394 start.go:360] acquireMachinesLock for no-preload-888056: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:26:25.273470   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:25.273886   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:25.273913   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:25.273829   78811 retry.go:31] will retry after 1.313344307s: waiting for machine to come up
	I0729 18:26:26.589378   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:26.589805   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:26.589852   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:26.589769   78811 retry.go:31] will retry after 1.553795128s: waiting for machine to come up
	I0729 18:26:28.145271   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:28.145680   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:28.145704   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:28.145643   78811 retry.go:31] will retry after 1.859680601s: waiting for machine to come up
	I0729 18:26:30.007588   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:30.007988   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:30.008018   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:30.007937   78811 retry.go:31] will retry after 1.754805493s: waiting for machine to come up
	I0729 18:26:31.764527   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:31.765077   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:31.765107   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:31.765030   78811 retry.go:31] will retry after 2.769383357s: waiting for machine to come up
	I0729 18:26:34.536479   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:34.536972   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:34.537007   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:34.536921   78811 retry.go:31] will retry after 3.355218512s: waiting for machine to come up
	I0729 18:26:39.563371   77859 start.go:364] duration metric: took 3m59.712120998s to acquireMachinesLock for "default-k8s-diff-port-502055"
	I0729 18:26:39.563440   77859 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:26:39.563452   77859 fix.go:54] fixHost starting: 
	I0729 18:26:39.563871   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:26:39.563914   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:26:39.580545   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I0729 18:26:39.580962   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:26:39.581492   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:26:39.581518   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:26:39.581864   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:26:39.582096   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:39.582290   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:26:39.583857   77859 fix.go:112] recreateIfNeeded on default-k8s-diff-port-502055: state=Stopped err=<nil>
	I0729 18:26:39.583883   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	W0729 18:26:39.584062   77859 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:26:39.586281   77859 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-502055" ...
	I0729 18:26:39.587651   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Start
	I0729 18:26:39.587814   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Ensuring networks are active...
	I0729 18:26:39.588499   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Ensuring network default is active
	I0729 18:26:39.588864   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Ensuring network mk-default-k8s-diff-port-502055 is active
	I0729 18:26:39.589616   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Getting domain xml...
	I0729 18:26:39.590433   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Creating domain...
	I0729 18:26:37.896070   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:37.896640   77627 main.go:141] libmachine: (embed-certs-409322) Found IP for machine: 192.168.39.58
	I0729 18:26:37.896664   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has current primary IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:37.896670   77627 main.go:141] libmachine: (embed-certs-409322) Reserving static IP address...
	I0729 18:26:37.897129   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "embed-certs-409322", mac: "52:54:00:22:9f:57", ip: "192.168.39.58"} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:37.897157   77627 main.go:141] libmachine: (embed-certs-409322) Reserved static IP address: 192.168.39.58
	I0729 18:26:37.897173   77627 main.go:141] libmachine: (embed-certs-409322) DBG | skip adding static IP to network mk-embed-certs-409322 - found existing host DHCP lease matching {name: "embed-certs-409322", mac: "52:54:00:22:9f:57", ip: "192.168.39.58"}
	I0729 18:26:37.897189   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Getting to WaitForSSH function...
	I0729 18:26:37.897206   77627 main.go:141] libmachine: (embed-certs-409322) Waiting for SSH to be available...
	I0729 18:26:37.899216   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:37.899595   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:37.899616   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:37.899785   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Using SSH client type: external
	I0729 18:26:37.899808   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa (-rw-------)
	I0729 18:26:37.899845   77627 main.go:141] libmachine: (embed-certs-409322) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:26:37.899858   77627 main.go:141] libmachine: (embed-certs-409322) DBG | About to run SSH command:
	I0729 18:26:37.899872   77627 main.go:141] libmachine: (embed-certs-409322) DBG | exit 0
	I0729 18:26:38.026619   77627 main.go:141] libmachine: (embed-certs-409322) DBG | SSH cmd err, output: <nil>: 
	I0729 18:26:38.027028   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetConfigRaw
	I0729 18:26:38.027621   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetIP
	I0729 18:26:38.030532   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.030963   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.030989   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.031243   77627 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/config.json ...
	I0729 18:26:38.031413   77627 machine.go:94] provisionDockerMachine start ...
	I0729 18:26:38.031437   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:38.031642   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.033867   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.034218   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.034251   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.034380   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:38.034545   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.034682   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.034807   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:38.034992   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:38.035175   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:38.035185   77627 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:26:38.142565   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:26:38.142595   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetMachineName
	I0729 18:26:38.142842   77627 buildroot.go:166] provisioning hostname "embed-certs-409322"
	I0729 18:26:38.142872   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetMachineName
	I0729 18:26:38.143071   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.145625   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.145951   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.145974   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.146217   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:38.146423   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.146577   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.146730   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:38.146861   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:38.147046   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:38.147065   77627 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-409322 && echo "embed-certs-409322" | sudo tee /etc/hostname
	I0729 18:26:38.264341   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-409322
	
	I0729 18:26:38.264368   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.266846   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.267144   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.267171   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.267328   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:38.267488   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.267660   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.267757   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:38.267936   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:38.268106   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:38.268122   77627 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-409322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-409322/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-409322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:26:38.383748   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
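The shell fragment above is an idempotent /etc/hosts edit: if no entry already ends with the new hostname, it either rewrites an existing `127.0.1.1` line or appends one. A rough Go equivalent is sketched below; `ensureHostsEntry` is a made-up helper operating on a local file, not the provisioner's real implementation (which runs the shell script over SSH as shown).

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell logic from the log: if no line already
// maps to hostname, rewrite an existing "127.0.1.1" entry or append one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) > 0 && f[len(f)-1] == hostname {
			return nil // already present
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "embed-certs-409322"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```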
	I0729 18:26:38.383779   77627 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:26:38.383805   77627 buildroot.go:174] setting up certificates
	I0729 18:26:38.383817   77627 provision.go:84] configureAuth start
	I0729 18:26:38.383827   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetMachineName
	I0729 18:26:38.384110   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetIP
	I0729 18:26:38.386936   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.387320   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.387348   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.387508   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.389550   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.389871   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.389910   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.389978   77627 provision.go:143] copyHostCerts
	I0729 18:26:38.390039   77627 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:26:38.390052   77627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:26:38.390137   77627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:26:38.390257   77627 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:26:38.390268   77627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:26:38.390308   77627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:26:38.390406   77627 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:26:38.390416   77627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:26:38.390456   77627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:26:38.390526   77627 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.embed-certs-409322 san=[127.0.0.1 192.168.39.58 embed-certs-409322 localhost minikube]
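`provision.go:117` generates a server certificate for the machine with the SAN list `[127.0.0.1 192.168.39.58 embed-certs-409322 localhost minikube]`, signed by the minikube CA. The sketch below shows how such a certificate could be issued with Go's crypto/x509; it is self-signed for brevity (the real flow signs with ca.pem/ca-key.pem), and only the SANs and organization string are taken from the log.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-409322"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the provision.go line above.
		DNSNames:    []string{"embed-certs-409322", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.58")},
	}
	// Self-signed for brevity; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```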
	I0729 18:26:38.903674   77627 provision.go:177] copyRemoteCerts
	I0729 18:26:38.903758   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:26:38.903791   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.906662   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.906984   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.907018   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.907171   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:38.907360   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.907543   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:38.907667   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:26:38.992373   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 18:26:39.016465   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:26:39.039598   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:26:39.062415   77627 provision.go:87] duration metric: took 678.589364ms to configureAuth
	I0729 18:26:39.062443   77627 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:26:39.062622   77627 config.go:182] Loaded profile config "embed-certs-409322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:26:39.062696   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.065308   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.065703   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.065728   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.065902   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.066076   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.066244   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.066403   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.066553   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:39.066743   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:39.066759   77627 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:26:39.326153   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:26:39.326176   77627 machine.go:97] duration metric: took 1.29475208s to provisionDockerMachine
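The command above writes `CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '` to /etc/sysconfig/crio.minikube and restarts CRI-O, executed over the "native" SSH client, which in libmachine is built on golang.org/x/crypto/ssh. Below is a hedged sketch of issuing that remote command with the same library; the user, address, and key path come from the log, and everything else is illustrative rather than minikube's implementation.

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user and address taken from the log above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
	}
	client, err := ssh.Dial("tcp", "192.168.39.58:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	// Same shell command as the log: write the options file, then restart crio.
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	out, err := session.CombinedOutput(cmd)
	if err != nil {
		panic(err)
	}
	fmt.Printf("SSH cmd output: %s\n", out)
}
```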
	I0729 18:26:39.326186   77627 start.go:293] postStartSetup for "embed-certs-409322" (driver="kvm2")
	I0729 18:26:39.326195   77627 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:26:39.326209   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.326603   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:26:39.326637   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.329049   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.329448   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.329476   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.329616   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.329822   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.330022   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.330186   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:26:39.413084   77627 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:26:39.417438   77627 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:26:39.417462   77627 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:26:39.417535   77627 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:26:39.417626   77627 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:26:39.417749   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:26:39.427256   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:26:39.451330   77627 start.go:296] duration metric: took 125.132889ms for postStartSetup
	I0729 18:26:39.451362   77627 fix.go:56] duration metric: took 19.499949606s for fixHost
	I0729 18:26:39.451380   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.453750   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.454047   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.454072   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.454237   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.454416   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.454570   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.454698   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.454864   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:39.455069   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:39.455080   77627 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 18:26:39.563211   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277599.531173461
	
	I0729 18:26:39.563238   77627 fix.go:216] guest clock: 1722277599.531173461
	I0729 18:26:39.563248   77627 fix.go:229] Guest: 2024-07-29 18:26:39.531173461 +0000 UTC Remote: 2024-07-29 18:26:39.451365859 +0000 UTC m=+269.697720486 (delta=79.807602ms)
	I0729 18:26:39.563278   77627 fix.go:200] guest clock delta is within tolerance: 79.807602ms
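To detect clock skew, `fix.go` reads the guest clock with `date +%s.%N`, parses the epoch timestamp (1722277599.531173461 here), and compares it against the host-side reading; the 79.8ms delta is within tolerance, so no time sync is forced. A small sketch of that comparison is below; both readings are taken from the log, while the one-second tolerance is an illustrative assumption, not minikube's real threshold.

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N`
// (e.g. "1722277599.531173461") into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1722277599.531173461") // guest reading from the log
	if err != nil {
		panic(err)
	}
	host, err := parseGuestClock("1722277599.451365859") // host-side reading from the log
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(host)
	// 1s tolerance is an illustrative value, not minikube's actual threshold.
	if math.Abs(delta.Seconds()) < 1.0 {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock skewed by %v; time sync needed\n", delta)
	}
}
```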
	I0729 18:26:39.563287   77627 start.go:83] releasing machines lock for "embed-certs-409322", held for 19.611902888s
	I0729 18:26:39.563318   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.563562   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetIP
	I0729 18:26:39.566225   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.566549   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.566575   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.566766   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.567227   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.567378   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.567460   77627 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:26:39.567501   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.567565   77627 ssh_runner.go:195] Run: cat /version.json
	I0729 18:26:39.567593   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.570113   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.570330   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.570536   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.570558   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.570747   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.570754   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.570776   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.570883   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.571004   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.571113   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.571211   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.571330   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.571438   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:26:39.571478   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:26:39.651235   77627 ssh_runner.go:195] Run: systemctl --version
	I0729 18:26:39.677383   77627 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:26:39.824036   77627 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:26:39.830027   77627 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:26:39.830103   77627 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:26:39.845939   77627 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
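The `find ... -exec mv {} {}.mk_disabled` pass above parks any bridge or podman CNI configs in /etc/cni/net.d so they cannot conflict with the CNI minikube will install; here it disabled 87-podman-bridge.conflist. The Go sketch below performs roughly the same rename on a local directory; `disableConflictingCNI` is a hypothetical helper, not the function in cni.go.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI configs in dir to
// <name>.mk_disabled, mirroring the find/mv pipeline in the log.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	files, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Println("disabled:", files)
}
```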
	I0729 18:26:39.845963   77627 start.go:495] detecting cgroup driver to use...
	I0729 18:26:39.846019   77627 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:26:39.862867   77627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:26:39.878060   77627 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:26:39.878152   77627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:26:39.892471   77627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:26:39.906690   77627 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:26:40.039725   77627 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:26:40.201419   77627 docker.go:233] disabling docker service ...
	I0729 18:26:40.201489   77627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:26:40.222454   77627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:26:40.237523   77627 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:26:40.371463   77627 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:26:40.499676   77627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:26:40.514068   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:26:40.534051   77627 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:26:40.534114   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.545364   77627 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:26:40.545458   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.557113   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.568215   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.579433   77627 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:26:40.591005   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.601933   77627 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.621097   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.631960   77627 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:26:40.642308   77627 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:26:40.642383   77627 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:26:40.656469   77627 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:26:40.671251   77627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:26:40.784289   77627 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:26:40.933837   77627 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:26:40.933910   77627 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:26:40.939031   77627 start.go:563] Will wait 60s for crictl version
	I0729 18:26:40.939086   77627 ssh_runner.go:195] Run: which crictl
	I0729 18:26:40.943166   77627 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:26:40.985673   77627 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:26:40.985753   77627 ssh_runner.go:195] Run: crio --version
	I0729 18:26:41.013973   77627 ssh_runner.go:195] Run: crio --version
	I0729 18:26:41.046080   77627 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:26:40.822462   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting to get IP...
	I0729 18:26:40.823526   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:40.823948   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:40.824000   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:40.823920   78947 retry.go:31] will retry after 262.026124ms: waiting for machine to come up
	I0729 18:26:41.087492   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.087961   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.087991   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:41.087913   78947 retry.go:31] will retry after 380.066984ms: waiting for machine to come up
	I0729 18:26:41.469728   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.470215   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.470244   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:41.470181   78947 retry.go:31] will retry after 293.069239ms: waiting for machine to come up
	I0729 18:26:41.764797   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.765277   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.765303   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:41.765228   78947 retry.go:31] will retry after 491.247116ms: waiting for machine to come up
	I0729 18:26:42.257741   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:42.258247   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:42.258275   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:42.258220   78947 retry.go:31] will retry after 693.832082ms: waiting for machine to come up
	I0729 18:26:42.953375   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:42.954146   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:42.954169   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:42.954051   78947 retry.go:31] will retry after 710.005115ms: waiting for machine to come up
	I0729 18:26:43.666068   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:43.666478   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:43.666504   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:43.666438   78947 retry.go:31] will retry after 1.077324053s: waiting for machine to come up
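The `retry.go:31` lines above show the kvm2 driver polling libvirt for the new VM's DHCP lease and sleeping a growing, slightly randomized interval between attempts (262ms, 380ms, ..., ~1.1s). A generic sketch of that wait-for-machine pattern follows; the growth factor and jitter are illustrative choices, not the exact backoff schedule minikube uses.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts run out, sleeping
// for a jittered, growing interval between tries, similar to the
// "will retry after ..." lines in the log.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Grow the wait roughly linearly per attempt and add up to 50% jitter.
		wait := time.Duration(float64(base) * (1.0 + 0.5*float64(i)))
		wait += time.Duration(rand.Int63n(int64(wait) / 2))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	attempt := 0
	err := retryWithBackoff(5, 250*time.Millisecond, func() error {
		attempt++
		if attempt < 4 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	})
	if err != nil {
		fmt.Println("machine never came up:", err)
		return
	}
	fmt.Println("machine is up after", attempt, "attempts")
}
```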
	I0729 18:26:41.047322   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetIP
	I0729 18:26:41.049993   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:41.050394   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:41.050433   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:41.050630   77627 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 18:26:41.054805   77627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:26:41.066926   77627 kubeadm.go:883] updating cluster {Name:embed-certs-409322 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-409322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:26:41.067053   77627 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:26:41.067115   77627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:26:41.103417   77627 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 18:26:41.103489   77627 ssh_runner.go:195] Run: which lz4
	I0729 18:26:41.107793   77627 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 18:26:41.112161   77627 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:26:41.112192   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 18:26:42.559564   77627 crio.go:462] duration metric: took 1.451801292s to copy over tarball
	I0729 18:26:42.559679   77627 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:26:44.759513   77627 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.199801336s)
	I0729 18:26:44.759543   77627 crio.go:469] duration metric: took 2.199942615s to extract the tarball
	I0729 18:26:44.759554   77627 ssh_runner.go:146] rm: /preloaded.tar.lz4
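Because `crictl images` found no preloaded images, the ~406 MB preloaded-images tarball is copied to the guest and unpacked into /var with `tar --xattrs --xattrs-include security.capability -I lz4`, preserving file capabilities and decompressing through lz4. The sketch below shells out to the same tar invocation; it assumes `tar` and `lz4` are installed and reuses the paths from the log.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks a preloaded-images tarball into destDir using the
// same flags seen in the log: keep xattrs (security.capability) and
// decompress through lz4.
func extractPreload(tarball, destDir string) error {
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4",
		"-C", destDir,
		"-xf", tarball,
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
	}
}
```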
	I0729 18:26:44.744984   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:44.745450   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:44.745477   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:44.745403   78947 retry.go:31] will retry after 1.064257005s: waiting for machine to come up
	I0729 18:26:45.811414   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:45.811840   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:45.811880   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:45.811799   78947 retry.go:31] will retry after 1.30236943s: waiting for machine to come up
	I0729 18:26:47.116252   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:47.116668   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:47.116728   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:47.116647   78947 retry.go:31] will retry after 1.424333691s: waiting for machine to come up
	I0729 18:26:48.543481   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:48.543945   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:48.543973   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:48.543894   78947 retry.go:31] will retry after 2.106061522s: waiting for machine to come up
	I0729 18:26:44.798609   77627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:26:44.848236   77627 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:26:44.848257   77627 cache_images.go:84] Images are preloaded, skipping loading
	I0729 18:26:44.848265   77627 kubeadm.go:934] updating node { 192.168.39.58 8443 v1.30.3 crio true true} ...
	I0729 18:26:44.848355   77627 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-409322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-409322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:26:44.848415   77627 ssh_runner.go:195] Run: crio config
	I0729 18:26:44.901558   77627 cni.go:84] Creating CNI manager for ""
	I0729 18:26:44.901584   77627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:26:44.901597   77627 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:26:44.901625   77627 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-409322 NodeName:embed-certs-409322 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:26:44.901807   77627 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-409322"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
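The kubeadm/kubelet/kube-proxy YAML above is rendered by minikube from the option set dumped at `kubeadm.go:181` and written to /var/tmp/minikube/kubeadm.yaml.new. The sketch below shows the general idea of producing such a document with text/template; the parameter struct and the trimmed-down template are simplified stand-ins for illustration, not minikube's actual template.

```go
package main

import (
	"os"
	"text/template"
)

// kubeadmParams is a simplified stand-in for the options minikube feeds into
// its kubeadm config template.
type kubeadmParams struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := kubeadmParams{ // values from the log above
		AdvertiseAddress:  "192.168.39.58",
		BindPort:          8443,
		NodeName:          "embed-certs-409322",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.30.3",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
```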
	I0729 18:26:44.901875   77627 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:26:44.912290   77627 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:26:44.912351   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:26:44.921801   77627 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0729 18:26:44.940473   77627 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:26:44.958445   77627 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0729 18:26:44.976890   77627 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I0729 18:26:44.980974   77627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:26:44.994793   77627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:26:45.120453   77627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:26:45.138398   77627 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322 for IP: 192.168.39.58
	I0729 18:26:45.138419   77627 certs.go:194] generating shared ca certs ...
	I0729 18:26:45.138438   77627 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:26:45.138592   77627 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:26:45.138643   77627 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:26:45.138657   77627 certs.go:256] generating profile certs ...
	I0729 18:26:45.138751   77627 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/client.key
	I0729 18:26:45.138823   77627 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/apiserver.key.4af4a6b9
	I0729 18:26:45.138889   77627 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/proxy-client.key
	I0729 18:26:45.139034   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:26:45.139074   77627 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:26:45.139088   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:26:45.139122   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:26:45.139161   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:26:45.139200   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:26:45.139305   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:26:45.139979   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:26:45.177194   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:26:45.206349   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:26:45.242291   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:26:45.277062   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 18:26:45.312447   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 18:26:45.345482   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:26:45.369151   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:26:45.394521   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:26:45.418579   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:26:45.443252   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:26:45.466770   77627 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:26:45.484159   77627 ssh_runner.go:195] Run: openssl version
	I0729 18:26:45.490045   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:26:45.501166   77627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:26:45.505930   77627 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:26:45.505988   77627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:26:45.511926   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:26:45.522860   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:26:45.533560   77627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:26:45.538411   77627 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:26:45.538474   77627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:26:45.544485   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:26:45.555603   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:26:45.566407   77627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:26:45.570892   77627 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:26:45.570944   77627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:26:45.576555   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 18:26:45.587780   77627 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:26:45.592689   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:26:45.598981   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:26:45.604952   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:26:45.611225   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:26:45.617506   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:26:45.623744   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
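Each `openssl x509 -noout -checkend 86400` run above confirms that a control-plane certificate will still be valid 24 hours from now; only when all of them pass does minikube reuse the existing certificates instead of regenerating them. The Go snippet below performs an equivalent check with crypto/x509; the path in main is one of the certificates from the log.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// certValidFor reports whether the PEM certificate at path is still valid
// `window` from now, mirroring `openssl x509 -checkend <seconds>`.
func certValidFor(path string, window time.Duration) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return errors.New("no PEM block found in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(window).After(cert.NotAfter) {
		return fmt.Errorf("certificate %s expires at %s", path, cert.NotAfter)
	}
	return nil
}

func main() {
	err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("certificate will not expire within 24h")
}
```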
	I0729 18:26:45.629836   77627 kubeadm.go:392] StartCluster: {Name:embed-certs-409322 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-409322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:26:45.629947   77627 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:26:45.630003   77627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:26:45.667768   77627 cri.go:89] found id: ""
	I0729 18:26:45.667853   77627 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:26:45.678703   77627 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:26:45.678724   77627 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:26:45.678772   77627 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:26:45.691979   77627 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:26:45.693237   77627 kubeconfig.go:125] found "embed-certs-409322" server: "https://192.168.39.58:8443"
	I0729 18:26:45.696093   77627 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:26:45.708981   77627 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.58
	I0729 18:26:45.709017   77627 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:26:45.709030   77627 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:26:45.709088   77627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:26:45.748738   77627 cri.go:89] found id: ""
	I0729 18:26:45.748817   77627 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:26:45.775148   77627 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:26:45.786631   77627 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:26:45.786651   77627 kubeadm.go:157] found existing configuration files:
	
	I0729 18:26:45.786701   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:26:45.799453   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:26:45.799507   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:26:45.809691   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:26:45.819592   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:26:45.819638   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:26:45.832072   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:26:45.843769   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:26:45.843817   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:26:45.854649   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:26:45.863448   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:26:45.863504   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:26:45.872399   77627 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:26:45.881992   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:46.012679   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:47.143076   77627 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.130359187s)
	I0729 18:26:47.143112   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:47.370854   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:47.446808   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:47.550087   77627 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:26:47.550191   77627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:26:48.050502   77627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:26:48.550499   77627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:26:48.608713   77627 api_server.go:72] duration metric: took 1.058625786s to wait for apiserver process to appear ...
	I0729 18:26:48.608745   77627 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:26:48.608773   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:51.829925   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:26:51.829963   77627 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:26:51.829979   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:51.843474   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:26:51.843503   77627 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:26:52.109882   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:52.117387   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:26:52.117415   77627 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:26:52.608863   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:52.613809   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:26:52.613840   77627 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:26:53.109430   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:53.115353   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I0729 18:26:53.122373   77627 api_server.go:141] control plane version: v1.30.3
	I0729 18:26:53.122411   77627 api_server.go:131] duration metric: took 4.513658045s to wait for apiserver health ...
	I0729 18:26:53.122420   77627 cni.go:84] Creating CNI manager for ""
	I0729 18:26:53.122426   77627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:26:53.123807   77627 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:26:50.651329   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:50.651724   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:50.651753   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:50.651678   78947 retry.go:31] will retry after 3.358167933s: waiting for machine to come up
	I0729 18:26:54.014102   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:54.014543   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:54.014576   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:54.014495   78947 retry.go:31] will retry after 4.372189125s: waiting for machine to come up
	I0729 18:26:53.124953   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:26:53.140970   77627 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 18:26:53.179660   77627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:26:53.193885   77627 system_pods.go:59] 8 kube-system pods found
	I0729 18:26:53.193921   77627 system_pods.go:61] "coredns-7db6d8ff4d-vxvfc" [da2fd5a1-f57f-4374-99ee-9017e228176f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 18:26:53.193932   77627 system_pods.go:61] "etcd-embed-certs-409322" [3eca462f-6156-4858-a886-30d0d32faa35] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 18:26:53.193944   77627 system_pods.go:61] "kube-apiserver-embed-certs-409322" [4c6473c7-d7b8-4513-b800-7cab08748d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 18:26:53.193953   77627 system_pods.go:61] "kube-controller-manager-embed-certs-409322" [2dc47da0-3d24-49d8-91ae-13074468b423] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 18:26:53.193961   77627 system_pods.go:61] "kube-proxy-zf5jf" [a0b6fd82-d0b1-4821-a668-4cb6420b4860] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 18:26:53.193969   77627 system_pods.go:61] "kube-scheduler-embed-certs-409322" [ab422567-58e6-4f22-a7cf-391b35cc386c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 18:26:53.193977   77627 system_pods.go:61] "metrics-server-569cc877fc-flh27" [83d6c69c-200d-4ce2-80e9-b83ff5b6ebe9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:26:53.193989   77627 system_pods.go:61] "storage-provisioner" [73ff548f-26c3-4442-a9bd-bdac45261476] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 18:26:53.194002   77627 system_pods.go:74] duration metric: took 14.320361ms to wait for pod list to return data ...
	I0729 18:26:53.194014   77627 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:26:53.197826   77627 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:26:53.197858   77627 node_conditions.go:123] node cpu capacity is 2
	I0729 18:26:53.197870   77627 node_conditions.go:105] duration metric: took 3.850077ms to run NodePressure ...
	I0729 18:26:53.197884   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:53.467868   77627 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 18:26:53.471886   77627 kubeadm.go:739] kubelet initialised
	I0729 18:26:53.471905   77627 kubeadm.go:740] duration metric: took 4.016417ms waiting for restarted kubelet to initialise ...
	I0729 18:26:53.471912   77627 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:26:53.476695   77627 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-vxvfc" in "kube-system" namespace to be "Ready" ...
	I0729 18:26:53.480449   77627 pod_ready.go:97] node "embed-certs-409322" hosting pod "coredns-7db6d8ff4d-vxvfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.480481   77627 pod_ready.go:81] duration metric: took 3.766ms for pod "coredns-7db6d8ff4d-vxvfc" in "kube-system" namespace to be "Ready" ...
	E0729 18:26:53.480491   77627 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-409322" hosting pod "coredns-7db6d8ff4d-vxvfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.480501   77627 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:26:53.484712   77627 pod_ready.go:97] node "embed-certs-409322" hosting pod "etcd-embed-certs-409322" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.484739   77627 pod_ready.go:81] duration metric: took 4.228077ms for pod "etcd-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	E0729 18:26:53.484750   77627 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-409322" hosting pod "etcd-embed-certs-409322" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.484759   77627 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:26:53.488510   77627 pod_ready.go:97] node "embed-certs-409322" hosting pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.488532   77627 pod_ready.go:81] duration metric: took 3.76371ms for pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	E0729 18:26:53.488539   77627 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-409322" hosting pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.488545   77627 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:26:58.387940   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.388358   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Found IP for machine: 192.168.61.244
	I0729 18:26:58.388383   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has current primary IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.388396   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Reserving static IP address...
	I0729 18:26:58.388794   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-502055", mac: "52:54:00:ae:63:e1", ip: "192.168.61.244"} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.388826   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Reserved static IP address: 192.168.61.244
	I0729 18:26:58.388848   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | skip adding static IP to network mk-default-k8s-diff-port-502055 - found existing host DHCP lease matching {name: "default-k8s-diff-port-502055", mac: "52:54:00:ae:63:e1", ip: "192.168.61.244"}
	I0729 18:26:58.388873   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for SSH to be available...
	I0729 18:26:58.388894   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Getting to WaitForSSH function...
	I0729 18:26:58.390937   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.391281   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.391319   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.391381   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Using SSH client type: external
	I0729 18:26:58.391408   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa (-rw-------)
	I0729 18:26:58.391457   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.244 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:26:58.391490   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | About to run SSH command:
	I0729 18:26:58.391511   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | exit 0
	I0729 18:26:58.518399   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | SSH cmd err, output: <nil>: 
	I0729 18:26:58.518782   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetConfigRaw
	I0729 18:26:58.519492   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetIP
	I0729 18:26:58.522245   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.522580   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.522615   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.522862   77859 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/config.json ...
	I0729 18:26:58.523037   77859 machine.go:94] provisionDockerMachine start ...
	I0729 18:26:58.523053   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:58.523258   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:58.525654   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.525998   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.526018   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.526185   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:58.526351   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.526555   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.526705   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:58.526874   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:58.527066   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:58.527079   77859 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:26:58.635267   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:26:58.635302   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetMachineName
	I0729 18:26:58.635524   77859 buildroot.go:166] provisioning hostname "default-k8s-diff-port-502055"
	I0729 18:26:58.635550   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetMachineName
	I0729 18:26:58.635789   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:58.638770   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.639235   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.639265   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.639371   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:58.639564   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.639729   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.639865   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:58.640048   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:58.640227   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:58.640245   77859 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-502055 && echo "default-k8s-diff-port-502055" | sudo tee /etc/hostname
	I0729 18:26:58.760577   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-502055
	
	I0729 18:26:58.760603   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:58.763294   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.763591   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.763625   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.763766   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:58.763970   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.764159   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.764311   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:58.764480   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:58.764641   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:58.764659   77859 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-502055' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-502055/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-502055' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:26:58.879366   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:26:58.879400   77859 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:26:58.879440   77859 buildroot.go:174] setting up certificates
	I0729 18:26:58.879451   77859 provision.go:84] configureAuth start
	I0729 18:26:58.879463   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetMachineName
	I0729 18:26:58.879735   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetIP
	I0729 18:26:58.882335   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.882652   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.882680   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.882848   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:58.885023   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.885313   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.885339   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.885433   77859 provision.go:143] copyHostCerts
	I0729 18:26:58.885479   77859 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:26:58.885488   77859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:26:58.885544   77859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:26:58.885633   77859 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:26:58.885641   77859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:26:58.885660   77859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:26:58.885709   77859 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:26:58.885716   77859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:26:58.885733   77859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:26:58.885783   77859 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-502055 san=[127.0.0.1 192.168.61.244 default-k8s-diff-port-502055 localhost minikube]
	I0729 18:26:59.130657   77859 provision.go:177] copyRemoteCerts
	I0729 18:26:59.130724   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:26:59.130749   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.133536   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.133898   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.133922   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.134079   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.134260   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.134421   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.134530   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:26:59.216614   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 18:26:59.240540   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:26:59.267350   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:26:59.294003   77859 provision.go:87] duration metric: took 414.539559ms to configureAuth
	I0729 18:26:59.294032   77859 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:26:59.294222   77859 config.go:182] Loaded profile config "default-k8s-diff-port-502055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:26:59.294293   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.296911   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.297285   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.297311   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.297450   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.297656   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.297804   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.297935   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.298102   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:59.298265   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:59.298281   77859 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:26:59.557084   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:26:59.557131   77859 machine.go:97] duration metric: took 1.034080964s to provisionDockerMachine
	I0729 18:26:59.557148   77859 start.go:293] postStartSetup for "default-k8s-diff-port-502055" (driver="kvm2")
	I0729 18:26:59.557165   77859 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:26:59.557191   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.557496   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:26:59.557529   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.559962   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.560255   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.560276   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.560461   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.560635   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.560798   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.560953   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:26:59.645623   77859 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:26:59.650416   77859 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:26:59.650447   77859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:26:59.650531   77859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:26:59.650624   77859 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:26:59.650730   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:26:59.660864   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:26:59.685728   77859 start.go:296] duration metric: took 128.564534ms for postStartSetup
	I0729 18:26:59.685767   77859 fix.go:56] duration metric: took 20.122314731s for fixHost
	I0729 18:26:59.685791   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.688401   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.688773   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.688801   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.688978   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.689157   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.689293   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.689401   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.689551   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:59.689712   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:59.689722   77859 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:26:55.494570   77627 pod_ready.go:102] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"False"
	I0729 18:26:57.495784   77627 pod_ready.go:102] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"False"
	I0729 18:26:59.799712   78080 start.go:364] duration metric: took 4m12.475660562s to acquireMachinesLock for "old-k8s-version-386663"
	I0729 18:26:59.799786   78080 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:26:59.799796   78080 fix.go:54] fixHost starting: 
	I0729 18:26:59.800184   78080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:26:59.800215   78080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:26:59.816885   78080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37963
	I0729 18:26:59.817336   78080 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:26:59.817822   78080 main.go:141] libmachine: Using API Version  1
	I0729 18:26:59.817851   78080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:26:59.818283   78080 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:26:59.818505   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:26:59.818671   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetState
	I0729 18:26:59.820232   78080 fix.go:112] recreateIfNeeded on old-k8s-version-386663: state=Stopped err=<nil>
	I0729 18:26:59.820254   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	W0729 18:26:59.820426   78080 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:26:59.822140   78080 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-386663" ...
	I0729 18:26:59.799573   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277619.755982716
	
	I0729 18:26:59.799603   77859 fix.go:216] guest clock: 1722277619.755982716
	I0729 18:26:59.799614   77859 fix.go:229] Guest: 2024-07-29 18:26:59.755982716 +0000 UTC Remote: 2024-07-29 18:26:59.685771603 +0000 UTC m=+259.980298680 (delta=70.211113ms)
	I0729 18:26:59.799637   77859 fix.go:200] guest clock delta is within tolerance: 70.211113ms
	I0729 18:26:59.799641   77859 start.go:83] releasing machines lock for "default-k8s-diff-port-502055", held for 20.236230068s
	I0729 18:26:59.799672   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.799944   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetIP
	I0729 18:26:59.802636   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.802983   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.803013   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.803248   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.803740   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.803927   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.804023   77859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:26:59.804070   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.804193   77859 ssh_runner.go:195] Run: cat /version.json
	I0729 18:26:59.804229   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.807037   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.807117   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.807395   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.807435   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.807528   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.807547   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.807565   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.807708   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.807717   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.807910   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.807936   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.808043   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:26:59.808098   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.808244   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:26:59.920371   77859 ssh_runner.go:195] Run: systemctl --version
	I0729 18:26:59.926620   77859 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:27:00.072161   77859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:27:00.079273   77859 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:27:00.079340   77859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:27:00.096528   77859 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:27:00.096550   77859 start.go:495] detecting cgroup driver to use...
	I0729 18:27:00.096610   77859 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:27:00.113690   77859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:27:00.129058   77859 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:27:00.129126   77859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:27:00.143930   77859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:27:00.158085   77859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:27:00.296398   77859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:27:00.482313   77859 docker.go:233] disabling docker service ...
	I0729 18:27:00.482459   77859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:27:00.501504   77859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:27:00.520932   77859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:27:00.657805   77859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:27:00.792064   77859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:27:00.807790   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:27:00.827373   77859 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:27:00.827423   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.838281   77859 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:27:00.838340   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.849533   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.860820   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.872359   77859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:27:00.883904   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.895589   77859 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.914639   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.926278   77859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:27:00.936329   77859 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:27:00.936383   77859 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:27:00.951219   77859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:27:00.966530   77859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:01.086665   77859 ssh_runner.go:195] Run: sudo systemctl restart crio
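The step above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup_manager, conmon_cgroup, default_sysctls) and then restarts CRI-O. Below is a minimal Go sketch of that kind of line-oriented "key = value" rewrite; it is not minikube's own implementation, and the path used in main is only an illustrative stand-in.

// Sketch: replace a "key = value" line in a CRI-O drop-in the way the
// sed commands above do; not minikube's actual code.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func setConfKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Match any existing assignment of this key, one line at a time.
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf(`%s = %q`, key, value)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	// e.g. setConfKey("/etc/crio/crio.conf.d/02-crio.conf", "cgroup_manager", "cgroupfs")
	_ = setConfKey("/tmp/02-crio.conf.example", "pause_image", "registry.k8s.io/pause:3.9")
}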
	I0729 18:27:01.233627   77859 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:27:01.233703   77859 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:27:01.241055   77859 start.go:563] Will wait 60s for crictl version
	I0729 18:27:01.241122   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:27:01.244875   77859 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:27:01.284013   77859 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:27:01.284103   77859 ssh_runner.go:195] Run: crio --version
	I0729 18:27:01.315493   77859 ssh_runner.go:195] Run: crio --version
	I0729 18:27:01.348781   77859 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:26:59.823421   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .Start
	I0729 18:26:59.823575   78080 main.go:141] libmachine: (old-k8s-version-386663) Ensuring networks are active...
	I0729 18:26:59.824264   78080 main.go:141] libmachine: (old-k8s-version-386663) Ensuring network default is active
	I0729 18:26:59.824641   78080 main.go:141] libmachine: (old-k8s-version-386663) Ensuring network mk-old-k8s-version-386663 is active
	I0729 18:26:59.825024   78080 main.go:141] libmachine: (old-k8s-version-386663) Getting domain xml...
	I0729 18:26:59.825885   78080 main.go:141] libmachine: (old-k8s-version-386663) Creating domain...
	I0729 18:27:01.104265   78080 main.go:141] libmachine: (old-k8s-version-386663) Waiting to get IP...
	I0729 18:27:01.105349   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.105790   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.105836   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.105761   79098 retry.go:31] will retry after 308.255094ms: waiting for machine to come up
	I0729 18:27:01.415431   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.415999   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.416030   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.415952   79098 retry.go:31] will retry after 236.525723ms: waiting for machine to come up
	I0729 18:27:01.654767   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.655279   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.655312   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.655247   79098 retry.go:31] will retry after 311.010394ms: waiting for machine to come up
	I0729 18:27:01.967850   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.968374   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.968404   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.968333   79098 retry.go:31] will retry after 468.477549ms: waiting for machine to come up
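The retry.go lines above poll libvirt for the new domain's DHCP lease, sleeping a growing, jittered interval between attempts ("will retry after ...: waiting for machine to come up"). A minimal standalone sketch of that retry pattern follows; the lookupIP helper and the concrete delays are assumptions for illustration, not minikube's real retry package.

// Minimal sketch: poll a lookup function with a randomized, growing delay
// until it succeeds or a deadline passes.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for "ask libvirt for the domain's DHCP lease".
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		// Jitter and grow the delay, mirroring the varying intervals in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
}

func main() {
	if _, err := waitForIP(5 * time.Second); err != nil {
		fmt.Println(err)
	}
}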
	I0729 18:27:01.350059   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetIP
	I0729 18:27:01.352945   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:01.353398   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:27:01.353429   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:01.353630   77859 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 18:27:01.357955   77859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
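The /bin/bash one-liner above drops any existing host.minikube.internal line from /etc/hosts and appends a fresh entry for the gateway IP. Here is a hedged Go sketch of the same filter-and-append edit on a local file; the path in main is an example target, not the real /etc/hosts.

// Sketch only: rewrite a hosts file so that exactly one line maps the given
// name, mirroring the grep -v / echo / cp pipeline above.
package main

import (
	"os"
	"strings"
)

func setHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any previous entry for this name (the grep -v step).
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	_ = setHostsEntry("/tmp/hosts.example", "192.168.61.1", "host.minikube.internal")
}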
	I0729 18:27:01.371879   77859 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-502055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-502055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.244 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:27:01.372034   77859 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:27:01.372100   77859 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:01.412356   77859 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 18:27:01.412423   77859 ssh_runner.go:195] Run: which lz4
	I0729 18:27:01.417768   77859 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 18:27:01.422809   77859 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:27:01.422836   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 18:27:02.909800   77859 crio.go:462] duration metric: took 1.492088664s to copy over tarball
	I0729 18:27:02.909868   77859 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:26:59.995351   77627 pod_ready.go:102] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:01.999130   77627 pod_ready.go:102] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:04.012357   77627 pod_ready.go:92] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:04.012385   77627 pod_ready.go:81] duration metric: took 10.523832262s for pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.012398   77627 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zf5jf" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.025409   77627 pod_ready.go:92] pod "kube-proxy-zf5jf" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:04.025448   77627 pod_ready.go:81] duration metric: took 13.042254ms for pod "kube-proxy-zf5jf" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.025461   77627 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.036057   77627 pod_ready.go:92] pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:04.036078   77627 pod_ready.go:81] duration metric: took 10.608531ms for pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.036090   77627 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:02.438066   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:02.438657   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:02.438686   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:02.438618   79098 retry.go:31] will retry after 601.056921ms: waiting for machine to come up
	I0729 18:27:03.041582   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:03.042097   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:03.042127   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:03.042040   79098 retry.go:31] will retry after 712.049848ms: waiting for machine to come up
	I0729 18:27:03.755536   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:03.756010   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:03.756040   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:03.755988   79098 retry.go:31] will retry after 1.092318096s: waiting for machine to come up
	I0729 18:27:04.849745   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:04.850202   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:04.850226   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:04.850147   79098 retry.go:31] will retry after 903.54457ms: waiting for machine to come up
	I0729 18:27:05.754781   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:05.755193   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:05.755218   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:05.755157   79098 retry.go:31] will retry after 1.693512671s: waiting for machine to come up
	I0729 18:27:05.188101   77859 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.27820184s)
	I0729 18:27:05.188132   77859 crio.go:469] duration metric: took 2.278304723s to extract the tarball
	I0729 18:27:05.188140   77859 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 18:27:05.227453   77859 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:05.274530   77859 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:27:05.274560   77859 cache_images.go:84] Images are preloaded, skipping loading
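The preload step above stat-checks /preloaded.tar.lz4, copies the tarball over SSH, extracts it with tar -I lz4, and reports a "duration metric" for each phase. A small sketch of that measure-and-report pattern around an external command follows; the command and paths are copied from the log purely as examples, and this is not minikube's ssh_runner.

// Sketch of timing a long-running command and reporting a duration metric.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func runTimed(label string, name string, args ...string) error {
	start := time.Now()
	err := exec.Command(name, args...).Run()
	fmt.Printf("duration metric: took %s to %s\n", time.Since(start), label)
	return err
}

func main() {
	// Roughly what the extract step does on the guest:
	//   sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	_ = runTimed("extract the tarball", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
}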
	I0729 18:27:05.274571   77859 kubeadm.go:934] updating node { 192.168.61.244 8444 v1.30.3 crio true true} ...
	I0729 18:27:05.274708   77859 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-502055 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-502055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:27:05.274788   77859 ssh_runner.go:195] Run: crio config
	I0729 18:27:05.320697   77859 cni.go:84] Creating CNI manager for ""
	I0729 18:27:05.320725   77859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:27:05.320741   77859 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:27:05.320774   77859 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.244 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-502055 NodeName:default-k8s-diff-port-502055 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.244"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.244 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:27:05.320948   77859 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.244
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-502055"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.244
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.244"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:27:05.321028   77859 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:27:05.331541   77859 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:27:05.331609   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:27:05.341433   77859 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0729 18:27:05.358696   77859 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:27:05.376531   77859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0729 18:27:05.394349   77859 ssh_runner.go:195] Run: grep 192.168.61.244	control-plane.minikube.internal$ /etc/hosts
	I0729 18:27:05.398156   77859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.244	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:05.411839   77859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:05.561467   77859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:27:05.583184   77859 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055 for IP: 192.168.61.244
	I0729 18:27:05.583209   77859 certs.go:194] generating shared ca certs ...
	I0729 18:27:05.583251   77859 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:05.583406   77859 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:27:05.583460   77859 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:27:05.583473   77859 certs.go:256] generating profile certs ...
	I0729 18:27:05.583577   77859 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/client.key
	I0729 18:27:05.583642   77859 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/apiserver.key.2edc4448
	I0729 18:27:05.583692   77859 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/proxy-client.key
	I0729 18:27:05.583835   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:27:05.583872   77859 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:27:05.583886   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:27:05.583917   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:27:05.583957   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:27:05.583991   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:27:05.584048   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:05.584726   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:27:05.624996   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:27:05.670153   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:27:05.715354   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:27:05.743807   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 18:27:05.777366   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:27:05.802152   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:27:05.826974   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:27:05.850417   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:27:05.873185   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:27:05.899387   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:27:05.927963   77859 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:27:05.947817   77859 ssh_runner.go:195] Run: openssl version
	I0729 18:27:05.955635   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:27:05.969765   77859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:27:05.974559   77859 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:27:05.974606   77859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:27:05.980557   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:27:05.991819   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:27:06.004961   77859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:06.009999   77859 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:06.010074   77859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:06.016045   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:27:06.027698   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:27:06.039648   77859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:27:06.045057   77859 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:27:06.045130   77859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:27:06.051127   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 18:27:06.062761   77859 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:27:06.068832   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:27:06.076652   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:27:06.084517   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:27:06.091125   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:27:06.097346   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:27:06.103428   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
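Each "openssl x509 -noout -in <cert> -checkend 86400" run above asks whether the certificate stays valid for at least another 24 hours. The same check can be sketched natively with crypto/x509; the path in main is one of the certificates named in the log, but any PEM-encoded certificate works.

// Sketch of the check openssl x509 -checkend 86400 performs:
// fail if the certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func checkEnd(path string, within time.Duration) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("%s: no PEM data", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(within).After(cert.NotAfter) {
		return fmt.Errorf("%s expires at %s", path, cert.NotAfter)
	}
	return nil
}

func main() {
	if err := checkEnd("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour); err != nil {
		fmt.Println(err)
	}
}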
	I0729 18:27:06.109312   77859 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-502055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-502055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.244 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:27:06.109403   77859 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:27:06.109440   77859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:06.153439   77859 cri.go:89] found id: ""
	I0729 18:27:06.153528   77859 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:27:06.166412   77859 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:27:06.166434   77859 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:27:06.166486   77859 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:27:06.183064   77859 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:27:06.184168   77859 kubeconfig.go:125] found "default-k8s-diff-port-502055" server: "https://192.168.61.244:8444"
	I0729 18:27:06.186283   77859 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:27:06.197418   77859 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.244
	I0729 18:27:06.197444   77859 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:27:06.197454   77859 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:27:06.197506   77859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:06.237753   77859 cri.go:89] found id: ""
	I0729 18:27:06.237839   77859 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:27:06.257323   77859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:27:06.269157   77859 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:27:06.269176   77859 kubeadm.go:157] found existing configuration files:
	
	I0729 18:27:06.269229   77859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 18:27:06.279313   77859 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:27:06.279369   77859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:27:06.292141   77859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 18:27:06.303961   77859 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:27:06.304028   77859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:27:06.316051   77859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 18:27:06.328004   77859 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:27:06.328064   77859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:27:06.340357   77859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 18:27:06.352021   77859 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:27:06.352068   77859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:27:06.364479   77859 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:27:06.375313   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:06.498692   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:07.853845   77859 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.355105254s)
	I0729 18:27:07.853882   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:08.069616   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:08.144574   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:08.225236   77859 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:27:08.225336   77859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:08.725789   77859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:09.226271   77859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:09.270268   77859 api_server.go:72] duration metric: took 1.045028259s to wait for apiserver process to appear ...
	I0729 18:27:09.270298   77859 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:27:09.270320   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:09.270877   77859 api_server.go:269] stopped: https://192.168.61.244:8444/healthz: Get "https://192.168.61.244:8444/healthz": dial tcp 192.168.61.244:8444: connect: connection refused
	I0729 18:27:06.043838   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:08.044382   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:07.451087   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:07.451659   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:07.451688   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:07.451607   79098 retry.go:31] will retry after 1.734643072s: waiting for machine to come up
	I0729 18:27:09.188407   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:09.188963   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:09.188997   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:09.188900   79098 retry.go:31] will retry after 2.010973572s: waiting for machine to come up
	I0729 18:27:11.201171   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:11.201586   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:11.201620   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:11.201535   79098 retry.go:31] will retry after 3.178533437s: waiting for machine to come up
	I0729 18:27:09.771273   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:12.506136   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:27:12.506166   77859 api_server.go:103] status: https://192.168.61.244:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:27:12.506179   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:12.518847   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:27:12.518881   77859 api_server.go:103] status: https://192.168.61.244:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:27:12.771281   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:12.775798   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:27:12.775832   77859 api_server.go:103] status: https://192.168.61.244:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:27:13.270383   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:13.281935   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:27:13.281975   77859 api_server.go:103] status: https://192.168.61.244:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:27:13.770440   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:13.776004   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 200:
	ok
	I0729 18:27:13.783210   77859 api_server.go:141] control plane version: v1.30.3
	I0729 18:27:13.783237   77859 api_server.go:131] duration metric: took 4.512933596s to wait for apiserver health ...
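The wait above polls https://192.168.61.244:8444/healthz, treating 403 and 500 responses as "not ready yet" and finishing once the endpoint answers 200 ok. A hedged sketch of such a polling loop follows; skipping TLS verification for the cluster's self-signed serving certificate is an assumption made only to keep the example self-contained.

// Sketch: poll an apiserver /healthz endpoint until it returns 200
// or a deadline passes; non-200 responses are treated as "not ready yet".
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for illustration: do not verify the self-signed cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.244:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}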
	I0729 18:27:13.783247   77859 cni.go:84] Creating CNI manager for ""
	I0729 18:27:13.783253   77859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:27:13.785148   77859 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:27:13.786485   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:27:13.814986   77859 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 18:27:13.860557   77859 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:27:13.872823   77859 system_pods.go:59] 8 kube-system pods found
	I0729 18:27:13.872864   77859 system_pods.go:61] "coredns-7db6d8ff4d-mk6mx" [e005b1f9-cc7a-45aa-915e-85a461ebc814] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 18:27:13.872871   77859 system_pods.go:61] "etcd-default-k8s-diff-port-502055" [72b552cc-67b0-46bf-b3dd-b6732ebe8493] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 18:27:13.872879   77859 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-502055" [0dc22dbc-667e-4d6f-9938-b13bf3503f79] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 18:27:13.872885   77859 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-502055" [4df00b98-12cf-4359-9d98-8cce6ee9708a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 18:27:13.872891   77859 system_pods.go:61] "kube-proxy-cgdm8" [57a99bb3-9e63-47dd-a958-5be7f3c0a9c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 18:27:13.872898   77859 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-502055" [247b7cd1-6267-469d-af05-b33b284ae846] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 18:27:13.872903   77859 system_pods.go:61] "metrics-server-569cc877fc-bm8tm" [6891d9ee-82db-4307-adf1-ff60d35506bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:27:13.872912   77859 system_pods.go:61] "storage-provisioner" [c2264d30-60dc-41f9-9b84-3b073031cf1b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 18:27:13.872920   77859 system_pods.go:74] duration metric: took 12.342162ms to wait for pod list to return data ...
	I0729 18:27:13.872929   77859 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:27:13.879353   77859 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:27:13.879384   77859 node_conditions.go:123] node cpu capacity is 2
	I0729 18:27:13.879396   77859 node_conditions.go:105] duration metric: took 6.459994ms to run NodePressure ...
	I0729 18:27:13.879416   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:14.172203   77859 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 18:27:14.178467   77859 kubeadm.go:739] kubelet initialised
	I0729 18:27:14.178490   77859 kubeadm.go:740] duration metric: took 6.259862ms waiting for restarted kubelet to initialise ...
	I0729 18:27:14.178499   77859 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:27:14.184872   77859 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.190847   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.190871   77859 pod_ready.go:81] duration metric: took 5.974917ms for pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.190879   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.190886   77859 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.195570   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.195593   77859 pod_ready.go:81] duration metric: took 4.699847ms for pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.195603   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.195610   77859 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.199460   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.199480   77859 pod_ready.go:81] duration metric: took 3.863218ms for pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.199489   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.199494   77859 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.264725   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.264759   77859 pod_ready.go:81] duration metric: took 65.256372ms for pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.264774   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.264781   77859 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cgdm8" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.664064   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "kube-proxy-cgdm8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.664089   77859 pod_ready.go:81] duration metric: took 399.300184ms for pod "kube-proxy-cgdm8" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.664100   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "kube-proxy-cgdm8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.664109   77859 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:10.044797   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:12.543553   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:15.064029   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.064059   77859 pod_ready.go:81] duration metric: took 399.939139ms for pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:15.064074   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.064082   77859 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:15.464538   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.464564   77859 pod_ready.go:81] duration metric: took 400.472397ms for pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:15.464584   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.464592   77859 pod_ready.go:38] duration metric: took 1.286083847s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
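The pod_ready block above checks each system-critical pod for the PodReady condition and skips ahead while the node itself reports Ready=False. A stripped-down version of the per-pod check with client-go is sketched below; it is a hypothetical helper, the kubeconfig path and pod name are placeholders from this log, and it omits the node-status short-circuit.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19345-11206/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-default-k8s-diff-port-502055", metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}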
	I0729 18:27:15.464609   77859 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 18:27:15.478197   77859 ops.go:34] apiserver oom_adj: -16
	I0729 18:27:15.478220   77859 kubeadm.go:597] duration metric: took 9.311779975s to restartPrimaryControlPlane
	I0729 18:27:15.478229   77859 kubeadm.go:394] duration metric: took 9.368934157s to StartCluster
	I0729 18:27:15.478247   77859 settings.go:142] acquiring lock: {Name:mkd2c4591636cc1d19b23a0dab1807db2e7ea395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:15.478311   77859 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:27:15.479920   77859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:15.480159   77859 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.244 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:27:15.480244   77859 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 18:27:15.480322   77859 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-502055"
	I0729 18:27:15.480355   77859 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-502055"
	I0729 18:27:15.480356   77859 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-502055"
	W0729 18:27:15.480368   77859 addons.go:243] addon storage-provisioner should already be in state true
	I0729 18:27:15.480371   77859 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-502055"
	I0729 18:27:15.480396   77859 host.go:66] Checking if "default-k8s-diff-port-502055" exists ...
	I0729 18:27:15.480397   77859 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-502055"
	I0729 18:27:15.480402   77859 config.go:182] Loaded profile config "default-k8s-diff-port-502055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:27:15.480415   77859 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-502055"
	W0729 18:27:15.480426   77859 addons.go:243] addon metrics-server should already be in state true
	I0729 18:27:15.480460   77859 host.go:66] Checking if "default-k8s-diff-port-502055" exists ...
	I0729 18:27:15.480709   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.480723   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.480738   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.480738   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.480914   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.480943   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.482004   77859 out.go:177] * Verifying Kubernetes components...
	I0729 18:27:15.483504   77859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:15.495748   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35469
	I0729 18:27:15.495965   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43251
	I0729 18:27:15.495977   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41147
	I0729 18:27:15.496147   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.496324   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.496433   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.496604   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.496622   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.496760   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.496778   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.496914   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.496930   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.496982   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.497086   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.497219   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.497644   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.497672   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.498076   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:27:15.498408   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.498449   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.501769   77859 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-502055"
	W0729 18:27:15.501790   77859 addons.go:243] addon default-storageclass should already be in state true
	I0729 18:27:15.501814   77859 host.go:66] Checking if "default-k8s-diff-port-502055" exists ...
	I0729 18:27:15.502132   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.502163   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.516862   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42139
	I0729 18:27:15.517070   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33417
	I0729 18:27:15.517336   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.517525   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.517845   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.517877   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.518255   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.518356   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.518418   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.518657   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:27:15.518793   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.519009   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:27:15.520045   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44865
	I0729 18:27:15.520489   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.520613   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:27:15.520785   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:27:15.520962   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.520979   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.521295   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.521697   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.521712   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.522950   77859 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:15.522950   77859 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 18:27:15.524246   77859 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 18:27:15.524268   77859 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 18:27:15.524291   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:27:15.524355   77859 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:27:15.524370   77859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 18:27:15.524388   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:27:15.527946   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.528008   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.528609   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:27:15.528645   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.528678   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:27:15.528691   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.528723   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:27:15.528939   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:27:15.528953   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:27:15.529101   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:27:15.529150   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:27:15.529218   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:27:15.529524   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:27:15.529716   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:27:15.539969   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41273
	I0729 18:27:15.540410   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.540887   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.540913   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.541351   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.541675   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:27:15.543494   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:27:15.543728   77859 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 18:27:15.543744   77859 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 18:27:15.543762   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:27:15.546809   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.547225   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:27:15.547250   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.547405   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:27:15.547595   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:27:15.547736   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:27:15.547859   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:27:15.662741   77859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:27:15.681179   77859 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-502055" to be "Ready" ...
	I0729 18:27:15.754691   77859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:27:15.767498   77859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 18:27:15.767515   77859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 18:27:15.781857   77859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 18:27:15.801619   77859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 18:27:15.801645   77859 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 18:27:15.823663   77859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:27:15.823690   77859 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 18:27:15.847827   77859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:27:16.818178   77859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.063432468s)
	I0729 18:27:16.818180   77859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.036288517s)
	I0729 18:27:16.818268   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.818234   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.818290   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.818307   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.818677   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Closing plugin on server side
	I0729 18:27:16.818680   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Closing plugin on server side
	I0729 18:27:16.818694   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.818710   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.818723   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.818724   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.818735   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.818740   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.818755   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.818766   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.818989   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.819000   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.819004   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.819017   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.819014   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Closing plugin on server side
	I0729 18:27:16.824028   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.824047   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.824268   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.824292   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.877321   77859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.029455089s)
	I0729 18:27:16.877378   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.877393   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.877718   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.877767   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.877790   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.877801   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.878030   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.878047   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.878061   77859 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-502055"
	I0729 18:27:16.879704   77859 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 18:27:14.381238   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:14.381648   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:14.381677   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:14.381609   79098 retry.go:31] will retry after 4.005160817s: waiting for machine to come up
	I0729 18:27:16.880972   77859 addons.go:510] duration metric: took 1.400728317s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
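The enable-addons sequence above copies the addon manifests to /etc/kubernetes/addons on the node and applies them with the kubectl binary minikube ships for the cluster's Kubernetes version. A small sketch that mirrors the apply step as a single shell line (assumed to run on the node; the paths are copied from the log, the wrapper itself is hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same shape as the logged command: node-local kubeconfig plus the pinned kubectl.
	cmd := exec.Command("bash", "-c",
		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig "+
			"/var/lib/minikube/binaries/v1.30.3/kubectl apply "+
			"-f /etc/kubernetes/addons/metrics-apiservice.yaml "+
			"-f /etc/kubernetes/addons/metrics-server-deployment.yaml "+
			"-f /etc/kubernetes/addons/metrics-server-rbac.yaml "+
			"-f /etc/kubernetes/addons/metrics-server-service.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}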
	I0729 18:27:17.685480   77859 node_ready.go:53] node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:19.687853   77859 node_ready.go:53] node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.042487   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:17.043250   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:19.045374   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:19.859418   77394 start.go:364] duration metric: took 54.906462088s to acquireMachinesLock for "no-preload-888056"
	I0729 18:27:19.859470   77394 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:27:19.859478   77394 fix.go:54] fixHost starting: 
	I0729 18:27:19.859850   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:19.859896   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:19.876798   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46323
	I0729 18:27:19.877254   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:19.877674   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:27:19.877709   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:19.878087   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:19.878257   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:19.878399   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:27:19.879875   77394 fix.go:112] recreateIfNeeded on no-preload-888056: state=Stopped err=<nil>
	I0729 18:27:19.879909   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	W0729 18:27:19.880054   77394 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:27:19.882098   77394 out.go:177] * Restarting existing kvm2 VM for "no-preload-888056" ...
	I0729 18:27:18.388470   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.388971   78080 main.go:141] libmachine: (old-k8s-version-386663) Found IP for machine: 192.168.50.70
	I0729 18:27:18.388989   78080 main.go:141] libmachine: (old-k8s-version-386663) Reserving static IP address...
	I0729 18:27:18.388999   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has current primary IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.389431   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "old-k8s-version-386663", mac: "52:54:00:78:b6:ac", ip: "192.168.50.70"} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.389459   78080 main.go:141] libmachine: (old-k8s-version-386663) Reserved static IP address: 192.168.50.70
	I0729 18:27:18.389477   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | skip adding static IP to network mk-old-k8s-version-386663 - found existing host DHCP lease matching {name: "old-k8s-version-386663", mac: "52:54:00:78:b6:ac", ip: "192.168.50.70"}
	I0729 18:27:18.389493   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | Getting to WaitForSSH function...
	I0729 18:27:18.389515   78080 main.go:141] libmachine: (old-k8s-version-386663) Waiting for SSH to be available...
	I0729 18:27:18.391523   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.391916   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.391941   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.392062   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | Using SSH client type: external
	I0729 18:27:18.392088   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa (-rw-------)
	I0729 18:27:18.392119   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:27:18.392134   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | About to run SSH command:
	I0729 18:27:18.392150   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | exit 0
	I0729 18:27:18.514735   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | SSH cmd err, output: <nil>: 
	I0729 18:27:18.515114   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetConfigRaw
	I0729 18:27:18.515736   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:18.518194   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.518615   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.518651   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.518879   78080 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/config.json ...
	I0729 18:27:18.519090   78080 machine.go:94] provisionDockerMachine start ...
	I0729 18:27:18.519113   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:18.519322   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.521434   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.521824   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.521846   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.521996   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:18.522181   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.522349   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.522514   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:18.522724   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:18.522960   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:18.522975   78080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:27:18.622960   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:27:18.622989   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetMachineName
	I0729 18:27:18.623249   78080 buildroot.go:166] provisioning hostname "old-k8s-version-386663"
	I0729 18:27:18.623277   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetMachineName
	I0729 18:27:18.623461   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.626009   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.626376   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.626406   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.626649   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:18.626876   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.627141   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.627301   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:18.627474   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:18.627669   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:18.627683   78080 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-386663 && echo "old-k8s-version-386663" | sudo tee /etc/hostname
	I0729 18:27:18.748137   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-386663
	
	I0729 18:27:18.748165   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.751546   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.751882   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.751916   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.752086   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:18.752270   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.752409   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.752550   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:18.752747   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:18.753004   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:18.753031   78080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-386663' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-386663/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-386663' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:27:18.863358   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:27:18.863389   78080 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:27:18.863415   78080 buildroot.go:174] setting up certificates
	I0729 18:27:18.863425   78080 provision.go:84] configureAuth start
	I0729 18:27:18.863436   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetMachineName
	I0729 18:27:18.863754   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:18.866285   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.866641   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.866668   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.866797   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.868886   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.869241   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.869270   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.869404   78080 provision.go:143] copyHostCerts
	I0729 18:27:18.869459   78080 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:27:18.869468   78080 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:27:18.869522   78080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:27:18.869614   78080 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:27:18.869624   78080 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:27:18.869652   78080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:27:18.869740   78080 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:27:18.869750   78080 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:27:18.869772   78080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:27:18.869833   78080 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-386663 san=[127.0.0.1 192.168.50.70 localhost minikube old-k8s-version-386663]
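The provision step above mints a per-machine server certificate whose SANs cover the loopback address, the VM IP, and the machine's hostnames. The sketch below generates a certificate with those SANs using the standard library; it is self-signed purely to keep the example short, whereas minikube signs with its own CA key, and the names and IPs are taken from the log line above.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-386663"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log line above.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-386663"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.70")},
	}
	// Self-signed for brevity; a CA-signed cert would pass the CA template and key here.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	certOut, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer certOut.Close()
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyOut, err := os.Create("server-key.pem")
	if err != nil {
		panic(err)
	}
	defer keyOut.Close()
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}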
	I0729 18:27:19.142743   78080 provision.go:177] copyRemoteCerts
	I0729 18:27:19.142808   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:27:19.142842   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.145484   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.145843   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.145872   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.146092   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.146334   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.146532   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.146692   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.230725   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:27:19.255862   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 18:27:19.290922   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 18:27:19.317519   78080 provision.go:87] duration metric: took 454.081583ms to configureAuth
	I0729 18:27:19.317549   78080 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:27:19.317766   78080 config.go:182] Loaded profile config "old-k8s-version-386663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 18:27:19.317854   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.320636   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.321074   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.321110   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.321346   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.321603   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.321782   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.321959   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.322158   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:19.322336   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:19.322351   78080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:27:19.626713   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:27:19.626737   78080 machine.go:97] duration metric: took 1.107631867s to provisionDockerMachine
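The sysconfig step a few lines up writes CRIO_MINIKUBE_OPTIONS (an insecure-registry entry for the service CIDR) into /etc/sysconfig/crio.minikube and restarts CRI-O. A minimal sketch with the equivalent effect, assumed to run as root on the node (the helper itself is hypothetical, the file path and option value are taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Drop the option into a file the CRI-O unit reads, then restart the daemon.
	content := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0o644); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
		fmt.Printf("restart failed: %v\n%s", err, out)
	}
}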
	I0729 18:27:19.626749   78080 start.go:293] postStartSetup for "old-k8s-version-386663" (driver="kvm2")
	I0729 18:27:19.626763   78080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:27:19.626834   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.627168   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:27:19.627197   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.629389   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.629751   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.629782   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.629907   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.630102   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.630302   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.630460   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.709702   78080 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:27:19.713879   78080 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:27:19.713913   78080 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:27:19.713994   78080 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:27:19.714093   78080 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:27:19.714215   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:27:19.725226   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:19.751727   78080 start.go:296] duration metric: took 124.964072ms for postStartSetup
	I0729 18:27:19.751767   78080 fix.go:56] duration metric: took 19.951972224s for fixHost
	I0729 18:27:19.751796   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.754481   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.754843   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.754877   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.755107   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.755321   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.755482   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.755663   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.755829   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:19.756012   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:19.756024   78080 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:27:19.859279   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277639.831700968
	
	I0729 18:27:19.859302   78080 fix.go:216] guest clock: 1722277639.831700968
	I0729 18:27:19.859309   78080 fix.go:229] Guest: 2024-07-29 18:27:19.831700968 +0000 UTC Remote: 2024-07-29 18:27:19.751770935 +0000 UTC m=+272.565043390 (delta=79.930033ms)
	I0729 18:27:19.859327   78080 fix.go:200] guest clock delta is within tolerance: 79.930033ms
	I0729 18:27:19.859332   78080 start.go:83] releasing machines lock for "old-k8s-version-386663", held for 20.059569122s
	I0729 18:27:19.859353   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.859661   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:19.862741   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.863225   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.863261   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.863449   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.864092   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.864309   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.864392   78080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:27:19.864432   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.864547   78080 ssh_runner.go:195] Run: cat /version.json
	I0729 18:27:19.864572   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.867636   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.867798   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.868019   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.868044   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.868178   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.868330   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.868356   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.868360   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.868500   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.868587   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.868667   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.868754   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.868910   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.869046   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.947441   78080 ssh_runner.go:195] Run: systemctl --version
	I0729 18:27:19.967868   78080 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:27:20.114336   78080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:27:20.121716   78080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:27:20.121793   78080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:27:20.143272   78080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:27:20.143298   78080 start.go:495] detecting cgroup driver to use...
	I0729 18:27:20.143385   78080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:27:20.162433   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:27:20.178310   78080 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:27:20.178397   78080 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:27:20.194091   78080 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:27:20.209796   78080 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:27:20.341466   78080 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:27:20.514215   78080 docker.go:233] disabling docker service ...
	I0729 18:27:20.514338   78080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:27:20.531018   78080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:27:20.551839   78080 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:27:20.680430   78080 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:27:20.834782   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:27:20.852454   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:27:20.874962   78080 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 18:27:20.875017   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:20.886550   78080 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:27:20.886619   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:20.899344   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:20.914254   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:20.927308   78080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:27:20.939807   78080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:27:20.951648   78080 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:27:20.951738   78080 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:27:20.967918   78080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:27:20.979872   78080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:21.125398   78080 ssh_runner.go:195] Run: sudo systemctl restart crio
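Editor's note: the crio.go:59–70 steps above are where minikube rewrites cri-o's drop-in config at /etc/crio/crio.conf.d/02-crio.conf with sed (pause image, cgroupfs cgroup manager, conmon pinned to the "pod" cgroup) and then reloads systemd and restarts crio. A minimal Go sketch of how those commands could be composed is below; the helper name buildCrioOverrides and returning the commands as a slice are illustrative assumptions, not minikube's actual API.

package main

import "fmt"

// buildCrioOverrides mirrors the sed edits shown in the log: set the pause image,
// switch cri-o's cgroup manager, and pin conmon to the "pod" cgroup in the
// 02-crio.conf drop-in, then restart crio. Helper name and structure are hypothetical.
func buildCrioOverrides(pauseImage, cgroupManager string) []string {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
}

func main() {
	for _, cmd := range buildCrioOverrides("registry.k8s.io/pause:3.2", "cgroupfs") {
		fmt.Println(cmd)
	}
}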
	I0729 18:27:21.290736   78080 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:27:21.290816   78080 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:27:21.296922   78080 start.go:563] Will wait 60s for crictl version
	I0729 18:27:21.296987   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:21.302200   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:27:21.350783   78080 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:27:21.350919   78080 ssh_runner.go:195] Run: crio --version
	I0729 18:27:21.391539   78080 ssh_runner.go:195] Run: crio --version
	I0729 18:27:21.441225   78080 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 18:27:21.442583   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:21.446238   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:21.446728   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:21.446756   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:21.446988   78080 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 18:27:21.452537   78080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
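Editor's note: the two commands above make the host.minikube.internal mapping idempotent by stripping any existing line for that name from /etc/hosts and appending a fresh "IP<tab>host" entry. The sketch below does the same thing in plain Go under stated assumptions: it edits a local file directly, whereas minikube performs the edit over SSH as root; the function name is illustrative only.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any stale line for the host name and appends a fresh
// "IP<tab>host" mapping, mirroring the grep/echo/cp one-liner in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop any stale mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("hosts.txt", "192.168.50.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}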
	I0729 18:27:21.470394   78080 kubeadm.go:883] updating cluster {Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:27:21.470555   78080 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:27:21.470610   78080 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:21.531670   78080 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 18:27:21.531742   78080 ssh_runner.go:195] Run: which lz4
	I0729 18:27:21.536436   78080 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 18:27:21.542100   78080 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:27:21.542139   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 18:27:19.883514   77394 main.go:141] libmachine: (no-preload-888056) Calling .Start
	I0729 18:27:19.883693   77394 main.go:141] libmachine: (no-preload-888056) Ensuring networks are active...
	I0729 18:27:19.884447   77394 main.go:141] libmachine: (no-preload-888056) Ensuring network default is active
	I0729 18:27:19.884847   77394 main.go:141] libmachine: (no-preload-888056) Ensuring network mk-no-preload-888056 is active
	I0729 18:27:19.885240   77394 main.go:141] libmachine: (no-preload-888056) Getting domain xml...
	I0729 18:27:19.886133   77394 main.go:141] libmachine: (no-preload-888056) Creating domain...
	I0729 18:27:21.226599   77394 main.go:141] libmachine: (no-preload-888056) Waiting to get IP...
	I0729 18:27:21.227673   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:21.228215   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:21.228278   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:21.228178   79288 retry.go:31] will retry after 290.676407ms: waiting for machine to come up
	I0729 18:27:21.520818   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:21.521458   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:21.521480   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:21.521360   79288 retry.go:31] will retry after 266.145355ms: waiting for machine to come up
	I0729 18:27:21.789603   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:21.790170   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:21.790200   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:21.790137   79288 retry.go:31] will retry after 464.137123ms: waiting for machine to come up
	I0729 18:27:22.255586   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:22.256159   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:22.256184   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:22.256098   79288 retry.go:31] will retry after 562.330595ms: waiting for machine to come up
	I0729 18:27:21.691280   77859 node_ready.go:53] node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:23.188725   77859 node_ready.go:49] node "default-k8s-diff-port-502055" has status "Ready":"True"
	I0729 18:27:23.188758   77859 node_ready.go:38] duration metric: took 7.507549954s for node "default-k8s-diff-port-502055" to be "Ready" ...
	I0729 18:27:23.188772   77859 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:27:23.197714   77859 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:23.204037   77859 pod_ready.go:92] pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:23.204065   77859 pod_ready.go:81] duration metric: took 6.32123ms for pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:23.204086   77859 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:23.211765   77859 pod_ready.go:92] pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:23.211791   77859 pod_ready.go:81] duration metric: took 7.69614ms for pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:23.211803   77859 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:21.544757   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:24.043649   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:23.329902   78080 crio.go:462] duration metric: took 1.793505279s to copy over tarball
	I0729 18:27:23.329979   78080 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:27:26.453768   78080 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.123735537s)
	I0729 18:27:26.453800   78080 crio.go:469] duration metric: took 3.123869338s to extract the tarball
	I0729 18:27:26.453809   78080 ssh_runner.go:146] rm: /preloaded.tar.lz4
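Editor's note: the preload tarball is copied to the guest and unpacked with `sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4`, and ssh_runner.go:235 reports elapsed time for the slow step. A minimal sketch of that "duration metric" pattern is below; the 1s reporting threshold and the local exec (instead of minikube's SSH runner) are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// timedRun runs a command and, if it took long enough to matter, reports how long,
// mirroring the "Completed: ... (3.12s)" lines in the log.
func timedRun(name string, args ...string) error {
	start := time.Now()
	err := exec.Command(name, args...).Run()
	if d := time.Since(start); d > time.Second {
		fmt.Printf("Completed: %s: (%s)\n", name, d)
	}
	return err
}

func main() {
	if err := timedRun("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4"); err != nil {
		fmt.Println("extract failed:", err)
	}
}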
	I0729 18:27:26.501748   78080 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:26.538093   78080 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 18:27:26.538124   78080 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 18:27:26.538226   78080 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.538297   78080 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 18:27:26.538387   78080 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.538232   78080 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:26.538441   78080 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.538303   78080 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.538277   78080 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.538783   78080 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.540806   78080 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.540823   78080 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.540847   78080 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.540858   78080 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 18:27:26.540806   78080 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:26.540894   78080 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.540937   78080 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.540987   78080 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.700993   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.704402   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.712647   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.714034   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.715935   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.753888   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.758588   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 18:27:26.837981   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:26.844473   78080 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 18:27:26.844532   78080 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.844578   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.877082   78080 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 18:27:26.877134   78080 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.877183   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.889792   78080 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 18:27:26.889887   78080 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.889842   78080 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 18:27:26.889944   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.889983   78080 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.890034   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.916338   78080 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 18:27:26.916388   78080 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.916440   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.916437   78080 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 18:27:26.916540   78080 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.916581   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.942747   78080 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 18:27:26.942794   78080 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 18:27:26.942839   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:27.056976   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:27.056976   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 18:27:27.057045   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:27.057071   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:27.057101   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:27.057152   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:27.057178   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 18:27:27.219396   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 18:27:22.820490   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:22.820969   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:22.820993   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:22.820906   79288 retry.go:31] will retry after 728.452145ms: waiting for machine to come up
	I0729 18:27:23.550655   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:23.551337   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:23.551361   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:23.551287   79288 retry.go:31] will retry after 782.583051ms: waiting for machine to come up
	I0729 18:27:24.335785   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:24.336257   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:24.336310   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:24.336235   79288 retry.go:31] will retry after 1.040109521s: waiting for machine to come up
	I0729 18:27:25.377676   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:25.378187   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:25.378231   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:25.378153   79288 retry.go:31] will retry after 1.276093038s: waiting for machine to come up
	I0729 18:27:26.655479   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:26.655922   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:26.655950   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:26.655872   79288 retry.go:31] will retry after 1.267687539s: waiting for machine to come up
	I0729 18:27:25.219175   77859 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:27.225735   77859 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:27.718741   77859 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:27.718772   77859 pod_ready.go:81] duration metric: took 4.506959705s for pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.718786   77859 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.723687   77859 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:27.723709   77859 pod_ready.go:81] duration metric: took 4.915901ms for pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.723720   77859 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cgdm8" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.728504   77859 pod_ready.go:92] pod "kube-proxy-cgdm8" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:27.728526   77859 pod_ready.go:81] duration metric: took 4.797185ms for pod "kube-proxy-cgdm8" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.728538   77859 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.733036   77859 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:27.733061   77859 pod_ready.go:81] duration metric: took 4.514471ms for pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.733073   77859 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:29.739966   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:26.044607   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:28.543664   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:27.219541   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 18:27:27.223329   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 18:27:27.223406   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 18:27:27.223450   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 18:27:27.223492   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 18:27:27.223536   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 18:27:27.223567   78080 cache_images.go:92] duration metric: took 685.427642ms to LoadCachedImages
	W0729 18:27:27.223653   78080 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0729 18:27:27.223672   78080 kubeadm.go:934] updating node { 192.168.50.70 8443 v1.20.0 crio true true} ...
	I0729 18:27:27.223785   78080 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-386663 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:27:27.223866   78080 ssh_runner.go:195] Run: crio config
	I0729 18:27:27.273186   78080 cni.go:84] Creating CNI manager for ""
	I0729 18:27:27.273207   78080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:27:27.273217   78080 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:27:27.273241   78080 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.70 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-386663 NodeName:old-k8s-version-386663 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 18:27:27.273424   78080 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-386663"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:27:27.273498   78080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 18:27:27.285247   78080 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:27:27.285327   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:27:27.295747   78080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 18:27:27.314192   78080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:27:27.331654   78080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 18:27:27.351717   78080 ssh_runner.go:195] Run: grep 192.168.50.70	control-plane.minikube.internal$ /etc/hosts
	I0729 18:27:27.356205   78080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:27.370446   78080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:27.509250   78080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:27:27.528776   78080 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663 for IP: 192.168.50.70
	I0729 18:27:27.528804   78080 certs.go:194] generating shared ca certs ...
	I0729 18:27:27.528823   78080 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:27.528991   78080 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:27:27.529045   78080 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:27:27.529061   78080 certs.go:256] generating profile certs ...
	I0729 18:27:27.529194   78080 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/client.key
	I0729 18:27:27.529308   78080 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.key.71ea3f9f
	I0729 18:27:27.529364   78080 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.key
	I0729 18:27:27.529529   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:27:27.529569   78080 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:27:27.529584   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:27:27.529614   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:27:27.529645   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:27:27.529689   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:27:27.529751   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:27.530573   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:27:27.582122   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:27:27.626846   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:27:27.663609   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:27:27.700294   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 18:27:27.746614   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:27:27.785212   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:27:27.834479   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:27:27.866939   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:27:27.892613   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:27:27.919059   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:27:27.947557   78080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:27:27.968625   78080 ssh_runner.go:195] Run: openssl version
	I0729 18:27:27.976500   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:27:27.991016   78080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:27:27.996228   78080 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:27:27.996285   78080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:27:28.002529   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:27:28.013844   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:27:28.025388   78080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:28.029982   78080 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:28.030042   78080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:28.036362   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:27:28.050134   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:27:28.062742   78080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:27:28.067240   78080 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:27:28.067293   78080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:27:28.072973   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
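Editor's note: for each extra CA certificate, the steps above link the .pem into /etc/ssl/certs under its own name, compute its OpenSSL subject hash with `openssl x509 -hash -noout`, and add a "<hash>.0" symlink so TLS clients that scan the directory can find it. The Go sketch below composes those per-certificate commands; the function name and the simplified command forms are illustrative, and the subject hash is passed in rather than computed.

package main

import "fmt"

// caInstallCommands sketches the per-certificate steps visible in the log: install
// the cert under its own name, compute its OpenSSL subject hash, and create the
// "<hash>.0" symlink in /etc/ssl/certs. Names and exact command forms are hypothetical.
func caInstallCommands(name, subjectHash string) []string {
	return []string{
		fmt.Sprintf("sudo /bin/bash -c \"test -s /usr/share/ca-certificates/%s && ln -fs /usr/share/ca-certificates/%s /etc/ssl/certs/%s\"", name, name, name),
		fmt.Sprintf("openssl x509 -hash -noout -in /usr/share/ca-certificates/%s", name),
		fmt.Sprintf("sudo /bin/bash -c \"test -L /etc/ssl/certs/%s.0 || ln -fs /etc/ssl/certs/%s /etc/ssl/certs/%s.0\"", subjectHash, name, subjectHash),
	}
}

func main() {
	for _, c := range caInstallCommands("183932.pem", "3ec20f2e") {
		fmt.Println(c)
	}
}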
	I0729 18:27:28.084143   78080 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:27:28.089526   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:27:28.096556   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:27:28.103044   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:27:28.109337   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:27:28.115455   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:27:28.121449   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 18:27:28.127395   78080 kubeadm.go:392] StartCluster: {Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:27:28.127504   78080 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:27:28.127581   78080 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:28.176772   78080 cri.go:89] found id: ""
	I0729 18:27:28.176837   78080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:27:28.187955   78080 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:27:28.187979   78080 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:27:28.188034   78080 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:27:28.197926   78080 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:27:28.199364   78080 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-386663" does not appear in /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:27:28.200382   78080 kubeconfig.go:62] /home/jenkins/minikube-integration/19345-11206/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-386663" cluster setting kubeconfig missing "old-k8s-version-386663" context setting]
	I0729 18:27:28.201737   78080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:28.287712   78080 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:27:28.300675   78080 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.70
	I0729 18:27:28.300716   78080 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:27:28.300728   78080 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:27:28.300795   78080 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:28.343880   78080 cri.go:89] found id: ""
	I0729 18:27:28.343962   78080 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:27:28.362391   78080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:27:28.372805   78080 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:27:28.372830   78080 kubeadm.go:157] found existing configuration files:
	
	I0729 18:27:28.372882   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:27:28.383540   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:27:28.383629   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:27:28.396564   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:27:28.409151   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:27:28.409208   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:27:28.422243   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:27:28.434736   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:27:28.434839   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:27:28.447681   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:27:28.460008   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:27:28.460073   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:27:28.472647   78080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:27:28.484179   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:28.634526   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:29.206575   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:29.449626   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:29.550859   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:29.681945   78080 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:27:29.682015   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:30.182098   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:30.682977   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:31.182152   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:31.682468   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:32.183031   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:27.924957   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:27.925430   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:27.925461   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:27.925378   79288 retry.go:31] will retry after 1.455979038s: waiting for machine to come up
	I0729 18:27:29.383257   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:29.383769   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:29.383793   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:29.383722   79288 retry.go:31] will retry after 1.862834258s: waiting for machine to come up
	I0729 18:27:31.248806   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:31.249394   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:31.249414   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:31.249344   79288 retry.go:31] will retry after 3.203097967s: waiting for machine to come up
	I0729 18:27:32.242350   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:34.738663   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:31.043735   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:33.543152   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:32.682567   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:33.182100   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:33.682494   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:34.183075   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:34.683115   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:35.183094   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:35.683092   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:36.182173   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:36.682843   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:37.182324   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:34.453552   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:34.453906   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:34.453930   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:34.453852   79288 retry.go:31] will retry after 3.166208105s: waiting for machine to come up
	I0729 18:27:36.739239   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:38.740812   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:35.543428   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:38.042603   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:37.622330   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.622738   77394 main.go:141] libmachine: (no-preload-888056) Found IP for machine: 192.168.72.80
	I0729 18:27:37.622767   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has current primary IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.622779   77394 main.go:141] libmachine: (no-preload-888056) Reserving static IP address...
	I0729 18:27:37.623108   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "no-preload-888056", mac: "52:54:00:b2:b0:1a", ip: "192.168.72.80"} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.623144   77394 main.go:141] libmachine: (no-preload-888056) DBG | skip adding static IP to network mk-no-preload-888056 - found existing host DHCP lease matching {name: "no-preload-888056", mac: "52:54:00:b2:b0:1a", ip: "192.168.72.80"}
	I0729 18:27:37.623160   77394 main.go:141] libmachine: (no-preload-888056) Reserved static IP address: 192.168.72.80
	I0729 18:27:37.623174   77394 main.go:141] libmachine: (no-preload-888056) Waiting for SSH to be available...
	I0729 18:27:37.623183   77394 main.go:141] libmachine: (no-preload-888056) DBG | Getting to WaitForSSH function...
	I0729 18:27:37.625391   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.625732   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.625759   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.625927   77394 main.go:141] libmachine: (no-preload-888056) DBG | Using SSH client type: external
	I0729 18:27:37.625948   77394 main.go:141] libmachine: (no-preload-888056) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa (-rw-------)
	I0729 18:27:37.625994   77394 main.go:141] libmachine: (no-preload-888056) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:27:37.626008   77394 main.go:141] libmachine: (no-preload-888056) DBG | About to run SSH command:
	I0729 18:27:37.626020   77394 main.go:141] libmachine: (no-preload-888056) DBG | exit 0
	I0729 18:27:37.750587   77394 main.go:141] libmachine: (no-preload-888056) DBG | SSH cmd err, output: <nil>: 
	I0729 18:27:37.750986   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetConfigRaw
	I0729 18:27:37.751717   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetIP
	I0729 18:27:37.754387   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.754753   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.754781   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.754995   77394 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/config.json ...
	I0729 18:27:37.755184   77394 machine.go:94] provisionDockerMachine start ...
	I0729 18:27:37.755207   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:37.755397   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:37.757649   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.757965   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.757988   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.758128   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:37.758297   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.758463   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.758599   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:37.758754   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:37.758918   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:37.758927   77394 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:27:37.862940   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:27:37.862976   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:27:37.863205   77394 buildroot.go:166] provisioning hostname "no-preload-888056"
	I0729 18:27:37.863234   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:27:37.863425   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:37.866190   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.866538   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.866565   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.866705   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:37.866878   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.867046   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.867166   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:37.867307   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:37.867478   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:37.867490   77394 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-888056 && echo "no-preload-888056" | sudo tee /etc/hostname
	I0729 18:27:37.985031   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-888056
	
	I0729 18:27:37.985070   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:37.987577   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.987917   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.987945   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.988126   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:37.988311   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.988469   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.988601   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:37.988786   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:37.988994   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:37.989012   77394 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-888056' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-888056/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-888056' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:27:38.103831   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:27:38.103853   77394 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:27:38.103870   77394 buildroot.go:174] setting up certificates
	I0729 18:27:38.103878   77394 provision.go:84] configureAuth start
	I0729 18:27:38.103886   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:27:38.104166   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetIP
	I0729 18:27:38.107080   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.107493   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.107521   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.107690   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.110087   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.110495   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.110520   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.110738   77394 provision.go:143] copyHostCerts
	I0729 18:27:38.110793   77394 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:27:38.110802   77394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:27:38.110853   77394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:27:38.110968   77394 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:27:38.110978   77394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:27:38.110998   77394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:27:38.111056   77394 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:27:38.111063   77394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:27:38.111080   77394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:27:38.111149   77394 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.no-preload-888056 san=[127.0.0.1 192.168.72.80 localhost minikube no-preload-888056]
	I0729 18:27:38.327305   77394 provision.go:177] copyRemoteCerts
	I0729 18:27:38.327378   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:27:38.327407   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.330008   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.330304   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.330327   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.330516   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:38.330739   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.330908   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:38.331071   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:27:38.414678   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 18:27:38.443418   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:27:38.469248   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 18:27:38.494014   77394 provision.go:87] duration metric: took 390.106553ms to configureAuth
	I0729 18:27:38.494049   77394 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:27:38.494245   77394 config.go:182] Loaded profile config "no-preload-888056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:27:38.494357   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.497162   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.497586   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.497620   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.497946   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:38.498137   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.498328   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.498566   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:38.498766   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:38.498940   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:38.498955   77394 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:27:38.762438   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:27:38.762462   77394 machine.go:97] duration metric: took 1.007266999s to provisionDockerMachine
	I0729 18:27:38.762473   77394 start.go:293] postStartSetup for "no-preload-888056" (driver="kvm2")
	I0729 18:27:38.762484   77394 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:27:38.762511   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:38.762797   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:27:38.762832   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.765677   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.766031   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.766054   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.766222   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:38.766432   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.766621   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:38.766774   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:27:38.854492   77394 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:27:38.858934   77394 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:27:38.858962   77394 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:27:38.859041   77394 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:27:38.859136   77394 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:27:38.859251   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:27:38.869459   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:38.894422   77394 start.go:296] duration metric: took 131.935433ms for postStartSetup
	I0729 18:27:38.894466   77394 fix.go:56] duration metric: took 19.034987866s for fixHost
	I0729 18:27:38.894492   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.897266   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.897654   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.897684   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.897890   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:38.898102   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.898250   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.898356   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:38.898547   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:38.898721   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:38.898732   77394 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:27:39.003526   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277658.970659996
	
	I0729 18:27:39.003571   77394 fix.go:216] guest clock: 1722277658.970659996
	I0729 18:27:39.003581   77394 fix.go:229] Guest: 2024-07-29 18:27:38.970659996 +0000 UTC Remote: 2024-07-29 18:27:38.8944731 +0000 UTC m=+356.533366653 (delta=76.186896ms)
	I0729 18:27:39.003600   77394 fix.go:200] guest clock delta is within tolerance: 76.186896ms
	I0729 18:27:39.003605   77394 start.go:83] releasing machines lock for "no-preload-888056", held for 19.144159359s
	I0729 18:27:39.003622   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:39.003881   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetIP
	I0729 18:27:39.006550   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.006850   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:39.006886   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.007005   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:39.007597   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:39.007779   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:39.007879   77394 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:27:39.007939   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:39.008001   77394 ssh_runner.go:195] Run: cat /version.json
	I0729 18:27:39.008026   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:39.010634   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.010941   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:39.010965   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.010984   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.011257   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:39.011442   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:39.011474   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.011487   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:39.011632   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:39.011678   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:39.011782   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:39.011951   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:39.011985   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:27:39.012094   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:27:39.114446   77394 ssh_runner.go:195] Run: systemctl --version
	I0729 18:27:39.120848   77394 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:27:39.266976   77394 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:27:39.273603   77394 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:27:39.273670   77394 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:27:39.295511   77394 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:27:39.295533   77394 start.go:495] detecting cgroup driver to use...
	I0729 18:27:39.295593   77394 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:27:39.313692   77394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:27:39.328435   77394 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:27:39.328502   77394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:27:39.342580   77394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:27:39.356694   77394 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:27:39.474555   77394 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:27:39.632766   77394 docker.go:233] disabling docker service ...
	I0729 18:27:39.632827   77394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:27:39.648961   77394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:27:39.663277   77394 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:27:39.813329   77394 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:27:39.944017   77394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:27:39.957624   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:27:39.976348   77394 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 18:27:39.976401   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:39.986672   77394 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:27:39.986735   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:39.996867   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.007547   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.018141   77394 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:27:40.029258   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.040007   77394 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.057611   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.068107   77394 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:27:40.077798   77394 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:27:40.077877   77394 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:27:40.091040   77394 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:27:40.100846   77394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:40.227049   77394 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:27:40.368213   77394 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:27:40.368295   77394 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:27:40.374168   77394 start.go:563] Will wait 60s for crictl version
	I0729 18:27:40.374239   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.378268   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:27:40.422500   77394 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:27:40.422579   77394 ssh_runner.go:195] Run: crio --version
	I0729 18:27:40.451170   77394 ssh_runner.go:195] Run: crio --version
	I0729 18:27:40.481789   77394 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 18:27:37.682180   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:38.182453   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:38.682639   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:39.182874   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:39.682496   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:40.182727   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:40.683073   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:41.182060   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:41.682421   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:42.182813   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:40.483209   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetIP
	I0729 18:27:40.486303   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:40.486738   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:40.486768   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:40.487032   77394 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 18:27:40.491318   77394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:40.505196   77394 kubeadm.go:883] updating cluster {Name:no-preload-888056 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-888056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.80 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:27:40.505303   77394 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 18:27:40.505333   77394 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:40.541356   77394 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 18:27:40.541380   77394 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 18:27:40.541445   77394 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:40.541452   77394 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:40.541465   77394 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:40.541495   77394 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.541503   77394 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.541527   77394 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.541583   77394 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:40.542060   77394 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 18:27:40.543507   77394 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:40.543519   77394 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.543505   77394 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 18:27:40.543535   77394 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.543504   77394 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.543761   77394 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:40.543799   77394 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:40.543999   77394 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:40.693026   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.709057   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.715664   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.720337   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:40.746126   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 18:27:40.748805   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:40.759200   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:40.768613   77394 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 18:27:40.768659   77394 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.768705   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.812940   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:40.852143   77394 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 18:27:40.852173   77394 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 18:27:40.852191   77394 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.852206   77394 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.852237   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.852249   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.890477   77394 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 18:27:40.890521   77394 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:40.890566   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.991390   77394 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 18:27:40.991435   77394 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:40.991462   77394 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 18:27:40.991486   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.991501   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.991508   77394 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:40.991548   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.991556   77394 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 18:27:40.991579   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.991595   77394 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:40.991609   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.991654   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.991694   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:41.087626   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 18:27:41.087736   77394 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 18:27:41.087742   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 18:27:41.087782   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:41.087819   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 18:27:41.087830   77394 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 18:27:41.087883   77394 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 18:27:41.091774   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 18:27:41.091828   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:41.091858   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:41.091873   77394 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0729 18:27:41.104679   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 18:27:41.104702   77394 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 18:27:41.104733   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 18:27:41.104750   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 18:27:41.155992   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 18:27:41.156114   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 18:27:41.156227   77394 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 18:27:41.169410   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 18:27:41.169535   77394 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0729 18:27:41.176103   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 18:27:41.176116   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 18:27:41.176214   77394 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 18:27:41.241044   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:43.739887   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:40.543004   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:43.044338   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:42.682911   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:43.182279   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:43.682506   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:44.182109   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:44.682593   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:45.183002   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:45.682275   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:46.182491   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:46.683027   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:47.182311   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:44.874768   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.769989933s)
	I0729 18:27:44.874798   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 18:27:44.874827   77394 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 18:27:44.874861   77394 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (3.71860957s)
	I0729 18:27:44.874894   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 18:27:44.874906   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 18:27:44.874930   77394 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.705380577s)
	I0729 18:27:44.874947   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 18:27:44.874972   77394 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (3.698734733s)
	I0729 18:27:44.875001   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 18:27:46.333065   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.458135446s)
	I0729 18:27:46.333109   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 18:27:46.333137   77394 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 18:27:46.333175   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 18:27:45.739935   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:47.740654   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:45.542272   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:47.543683   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:47.682979   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:48.183024   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:48.682708   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:49.182427   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:49.682335   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:50.182146   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:50.682716   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:51.182231   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:51.683106   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:52.182739   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:48.194389   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.861190748s)
	I0729 18:27:48.194419   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 18:27:48.194443   77394 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 18:27:48.194483   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 18:27:50.159353   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.964849018s)
	I0729 18:27:50.159384   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 18:27:50.159427   77394 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 18:27:50.159494   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 18:27:52.256998   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.097482067s)
	I0729 18:27:52.257038   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 18:27:52.257075   77394 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 18:27:52.257125   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 18:27:50.239878   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:52.740167   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:50.042299   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:52.042567   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:54.043462   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:52.682628   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:53.182081   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:53.682919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:54.183194   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:54.682506   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:55.182992   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:55.682152   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:56.183083   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:56.682897   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:57.182789   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:52.899503   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 18:27:52.899539   77394 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 18:27:52.899594   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 18:27:54.868011   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.968389841s)
	I0729 18:27:54.868043   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 18:27:54.868075   77394 cache_images.go:123] Successfully loaded all cached images
	I0729 18:27:54.868080   77394 cache_images.go:92] duration metric: took 14.326689217s to LoadCachedImages
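Note on the image-loading sequence above: each cached tarball is staged under /var/lib/minikube/images (the copy is skipped when the file is already there) and then loaded into the CRI-O store with podman. A minimal Go sketch of that pattern, assuming a plain local copy stands in for minikube's SSH transfer and an existence check stands in for its size/mtime comparison (the helper name is hypothetical):

// loadCachedImage stages a cached image tarball on the node and loads it into
// the CRI-O image store via podman. Hypothetical local sketch only.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func loadCachedImage(cachePath, nodePath string) error {
	if _, err := os.Stat(nodePath); err == nil {
		// Mirrors the "copy: skipping ... (exists)" lines in the log.
		fmt.Printf("copy: skipping %s (exists)\n", nodePath)
	} else if err := exec.Command("cp", cachePath, nodePath).Run(); err != nil {
		return fmt.Errorf("copy %s: %w", cachePath, err)
	}
	// Load the tarball so CRI-O can serve the image without pulling it.
	if out, err := exec.Command("sudo", "podman", "load", "-i", nodePath).CombinedOutput(); err != nil {
		return fmt.Errorf("podman load: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := loadCachedImage(
		"/home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0",
		"/var/lib/minikube/images/etcd_3.5.14-0",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}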
	I0729 18:27:54.868088   77394 kubeadm.go:934] updating node { 192.168.72.80 8443 v1.31.0-beta.0 crio true true} ...
	I0729 18:27:54.868226   77394 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-888056 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-888056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:27:54.868305   77394 ssh_runner.go:195] Run: crio config
	I0729 18:27:54.928569   77394 cni.go:84] Creating CNI manager for ""
	I0729 18:27:54.928591   77394 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:27:54.928604   77394 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:27:54.928633   77394 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.80 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-888056 NodeName:no-preload-888056 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:27:54.928800   77394 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-888056"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:27:54.928871   77394 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 18:27:54.939479   77394 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:27:54.939534   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:27:54.948928   77394 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0729 18:27:54.966700   77394 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 18:27:54.984218   77394 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
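The 2165-byte kubeadm.yaml written above is rendered from the parameter struct logged at kubeadm.go:181. The sketch below only illustrates the text/template approach with a trimmed-down config; the template text and field names are illustrative assumptions, not minikube's actual template:

// Hypothetical illustration: render a reduced kubeadm config from parameters.
package main

import (
	"os"
	"text/template"
)

type kubeadmParams struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const configTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(configTmpl))
	// Values taken from the log above; the rendered YAML would be copied to
	// /var/tmp/minikube/kubeadm.yaml.new on the node.
	_ = t.Execute(os.Stdout, kubeadmParams{
		AdvertiseAddress:  "192.168.72.80",
		BindPort:          8443,
		NodeName:          "no-preload-888056",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.31.0-beta.0",
	})
}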
	I0729 18:27:55.000813   77394 ssh_runner.go:195] Run: grep 192.168.72.80	control-plane.minikube.internal$ /etc/hosts
	I0729 18:27:55.004529   77394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:55.016140   77394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:55.141053   77394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:27:55.158874   77394 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056 for IP: 192.168.72.80
	I0729 18:27:55.158897   77394 certs.go:194] generating shared ca certs ...
	I0729 18:27:55.158918   77394 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:55.159074   77394 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:27:55.159136   77394 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:27:55.159150   77394 certs.go:256] generating profile certs ...
	I0729 18:27:55.159245   77394 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/client.key
	I0729 18:27:55.159320   77394 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/apiserver.key.f09a151f
	I0729 18:27:55.159373   77394 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/proxy-client.key
	I0729 18:27:55.159511   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:27:55.159552   77394 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:27:55.159566   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:27:55.159600   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:27:55.159641   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:27:55.159680   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:27:55.159734   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:55.160575   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:27:55.211823   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:27:55.248637   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:27:55.287972   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:27:55.317920   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 18:27:55.346034   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:27:55.377569   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:27:55.402593   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 18:27:55.427969   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:27:55.452060   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:27:55.476635   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:27:55.500831   77394 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:27:55.518744   77394 ssh_runner.go:195] Run: openssl version
	I0729 18:27:55.524865   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:27:55.536601   77394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:27:55.541752   77394 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:27:55.541807   77394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:27:55.548070   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 18:27:55.559866   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:27:55.571833   77394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:27:55.576304   77394 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:27:55.576342   77394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:27:55.582204   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:27:55.594531   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:27:55.605773   77394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:55.610585   77394 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:55.610633   77394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:55.616478   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:27:55.628160   77394 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:27:55.632691   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:27:55.638793   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:27:55.644678   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:27:55.651117   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:27:55.657397   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:27:55.663351   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
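The openssl x509 -checkend 86400 runs above verify that each control-plane certificate is still valid 24 hours from now. An equivalent check in Go with crypto/x509 (hypothetical helper; the path is one of the certificates probed above):

// expiresWithin reports whether the certificate at path stops being valid
// within the given window, which is what "openssl x509 -checkend 86400"
// checks (86400 seconds = 24h).
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; it would be regenerated")
	}
}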
	I0729 18:27:55.670080   77394 kubeadm.go:392] StartCluster: {Name:no-preload-888056 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-888056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.80 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:27:55.670183   77394 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:27:55.670248   77394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:55.712280   77394 cri.go:89] found id: ""
	I0729 18:27:55.712343   77394 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:27:55.722878   77394 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:27:55.722898   77394 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:27:55.722935   77394 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:27:55.732704   77394 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:27:55.733646   77394 kubeconfig.go:125] found "no-preload-888056" server: "https://192.168.72.80:8443"
	I0729 18:27:55.736512   77394 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:27:55.748360   77394 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.80
	I0729 18:27:55.748403   77394 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:27:55.748416   77394 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:27:55.748464   77394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:55.789773   77394 cri.go:89] found id: ""
	I0729 18:27:55.789854   77394 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:27:55.808905   77394 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:27:55.819969   77394 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:27:55.819991   77394 kubeadm.go:157] found existing configuration files:
	
	I0729 18:27:55.820064   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:27:55.829392   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:27:55.829445   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:27:55.838934   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:27:55.848659   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:27:55.848720   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:27:55.859490   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:27:55.870024   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:27:55.870076   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:27:55.881599   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:27:55.891805   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:27:55.891869   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:27:55.901750   77394 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:27:55.911525   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:56.021031   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:57.075545   77394 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.054482988s)
	I0729 18:27:57.075571   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:57.302701   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:57.382837   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
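The restart path above re-runs the kubeadm init phases one by one (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. A simplified sketch of that sequencing under the same binaries directory and config path; it stops on the first failing phase and is not minikube's actual restart code:

// Hypothetical sketch: run the kubeadm init phases in order on the node.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := `sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" ` +
			"kubeadm init phase " + phase + " --config /var/tmp/minikube/kubeadm.yaml"
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n%s\n", phase, err, out)
			return
		}
	}
	fmt.Println("all kubeadm init phases completed")
}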
	I0729 18:27:55.261397   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:57.738688   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:59.739828   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:56.543870   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:59.043285   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:57.682237   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:58.182211   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:58.682456   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:59.182669   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:59.682863   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:00.182261   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:00.682993   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:01.182832   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:01.682899   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:02.182765   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:57.492480   77394 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:27:57.492580   77394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:57.993240   77394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:58.492965   77394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:58.517442   77394 api_server.go:72] duration metric: took 1.024961129s to wait for apiserver process to appear ...
	I0729 18:27:58.517479   77394 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:27:58.517505   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:27:58.518046   77394 api_server.go:269] stopped: https://192.168.72.80:8443/healthz: Get "https://192.168.72.80:8443/healthz": dial tcp 192.168.72.80:8443: connect: connection refused
	I0729 18:27:59.017614   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:02.088238   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:28:02.088265   77394 api_server.go:103] status: https://192.168.72.80:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:28:02.088277   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:02.147855   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:28:02.147882   77394 api_server.go:103] status: https://192.168.72.80:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:28:02.518439   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:02.525213   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:28:02.525247   77394 api_server.go:103] status: https://192.168.72.80:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:28:03.018275   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:03.024993   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:28:03.025023   77394 api_server.go:103] status: https://192.168.72.80:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:28:03.517564   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:03.523409   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 200:
	ok
	I0729 18:28:03.529656   77394 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 18:28:03.529687   77394 api_server.go:131] duration metric: took 5.01219984s to wait for apiserver health ...
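The healthz wait above polls https://192.168.72.80:8443/healthz and treats the early 403 (RBAC not yet bootstrapped) and 500 (post-start hooks pending) responses as transient until a 200 arrives. A self-contained sketch of such a poll loop; TLS verification is disabled only because the sketch does not load minikube's CA, and the URL and timeout are taken from the log:

// waitForHealthz polls an apiserver healthz endpoint until it returns 200 or
// the deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403 and 500 are expected while the control plane settles.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.80:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}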
	I0729 18:28:03.529698   77394 cni.go:84] Creating CNI manager for ""
	I0729 18:28:03.529706   77394 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:28:03.531527   77394 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:28:01.740935   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:03.743806   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:01.043882   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:03.542540   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:02.682331   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:03.182154   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:03.682499   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:04.182355   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:04.682338   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:05.182107   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:05.683125   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:06.182481   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:06.683153   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:07.182992   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:03.532788   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:28:03.544878   77394 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 18:28:03.586100   77394 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:28:03.604975   77394 system_pods.go:59] 8 kube-system pods found
	I0729 18:28:03.605012   77394 system_pods.go:61] "coredns-5cfdc65f69-bg5j4" [7a26ffbb-014c-4cf7-b302-214cf78374bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 18:28:03.605022   77394 system_pods.go:61] "etcd-no-preload-888056" [d76f2eb7-67d9-4ba0-8d2f-acfc78559651] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 18:28:03.605036   77394 system_pods.go:61] "kube-apiserver-no-preload-888056" [1dbea0ee-58be-47ca-b4ab-94065413768d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 18:28:03.605044   77394 system_pods.go:61] "kube-controller-manager-no-preload-888056" [fb8ce9d9-2953-4b91-8734-87bd38a63eb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 18:28:03.605051   77394 system_pods.go:61] "kube-proxy-w5z2f" [2425da76-cf2d-41c9-b8db-1370ab5333c5] Running
	I0729 18:28:03.605059   77394 system_pods.go:61] "kube-scheduler-no-preload-888056" [9958567f-116d-4094-9e7e-6208f7358486] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 18:28:03.605066   77394 system_pods.go:61] "metrics-server-78fcd8795b-jcdcw" [c506a5f8-d569-4c3d-9b6e-21b9fc63a86a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:28:03.605073   77394 system_pods.go:61] "storage-provisioner" [ccbc4fa6-1237-46ca-ac80-34972b9a43df] Running
	I0729 18:28:03.605082   77394 system_pods.go:74] duration metric: took 18.959807ms to wait for pod list to return data ...
	I0729 18:28:03.605095   77394 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:28:03.609225   77394 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:28:03.609249   77394 node_conditions.go:123] node cpu capacity is 2
	I0729 18:28:03.609261   77394 node_conditions.go:105] duration metric: took 4.16099ms to run NodePressure ...
	I0729 18:28:03.609278   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:28:03.881440   77394 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 18:28:03.886401   77394 kubeadm.go:739] kubelet initialised
	I0729 18:28:03.886429   77394 kubeadm.go:740] duration metric: took 4.958282ms waiting for restarted kubelet to initialise ...
	I0729 18:28:03.886440   77394 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:28:03.891373   77394 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:05.900595   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:06.239029   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:08.240309   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:06.042541   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:08.043322   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:07.682582   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:08.182094   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:08.682613   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:09.182936   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:09.682444   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:10.182354   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:10.682183   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:11.182502   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:11.682466   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:12.182113   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:08.397084   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:10.399546   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:10.897981   77394 pod_ready.go:92] pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:10.898006   77394 pod_ready.go:81] duration metric: took 7.006606905s for pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:10.898014   77394 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:10.903064   77394 pod_ready.go:92] pod "etcd-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:10.903088   77394 pod_ready.go:81] duration metric: took 5.066249ms for pod "etcd-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:10.903099   77394 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:11.409319   77394 pod_ready.go:92] pod "kube-apiserver-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:11.409344   77394 pod_ready.go:81] duration metric: took 506.238678ms for pod "kube-apiserver-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:11.409353   77394 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:10.250001   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:12.741099   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:10.542146   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:13.042422   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:12.682526   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:13.183014   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:13.682449   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:14.182138   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:14.683065   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:15.182838   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:15.682680   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:16.182714   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:16.682116   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:17.182842   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:13.415469   77394 pod_ready.go:102] pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:13.917111   77394 pod_ready.go:92] pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:13.917134   77394 pod_ready.go:81] duration metric: took 2.507774546s for pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.917149   77394 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-w5z2f" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.922045   77394 pod_ready.go:92] pod "kube-proxy-w5z2f" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:13.922069   77394 pod_ready.go:81] duration metric: took 4.912892ms for pod "kube-proxy-w5z2f" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.922080   77394 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.927633   77394 pod_ready.go:92] pod "kube-scheduler-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:13.927654   77394 pod_ready.go:81] duration metric: took 5.565409ms for pod "kube-scheduler-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.927666   77394 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:15.934081   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
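The pod_ready waits interleaved above poll each system pod until its Ready condition turns True, up to a 4m0s deadline. A sketch of the same wait using client-go; it assumes client-go is available as a module dependency, and the kubeconfig path and pod name are placeholders taken from the log rather than the test harness's real configuration:

// waitPodReady polls a pod until its Ready condition is True or the deadline
// passes.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // re-check every few seconds, as the log does
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "metrics-server-78fcd8795b-jcdcw", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}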
	I0729 18:28:15.240105   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:17.740031   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:19.740077   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:15.042540   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:17.043335   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:19.542061   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:17.683114   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:18.182919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:18.683103   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:19.182074   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:19.683031   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:20.182701   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:20.682749   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:21.182949   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:21.683001   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:22.182167   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:17.935797   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:20.434416   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:21.740735   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:24.238828   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:21.544060   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:24.042058   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:22.682723   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:23.182510   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:23.683084   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:24.182220   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:24.682699   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:25.182288   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:25.682433   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:26.182919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:26.682851   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:27.182225   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:22.435465   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:24.935088   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:26.239694   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:28.240174   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:26.542381   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:29.043706   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:27.682408   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:28.182187   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:28.683034   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:29.182922   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:29.682990   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:29.683063   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:29.730368   78080 cri.go:89] found id: ""
	I0729 18:28:29.730405   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.730413   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:29.730419   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:29.730473   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:29.770368   78080 cri.go:89] found id: ""
	I0729 18:28:29.770398   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.770409   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:29.770426   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:29.770479   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:29.809873   78080 cri.go:89] found id: ""
	I0729 18:28:29.809898   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.809906   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:29.809911   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:29.809970   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:29.848980   78080 cri.go:89] found id: ""
	I0729 18:28:29.849006   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.849016   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:29.849023   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:29.849082   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:29.887261   78080 cri.go:89] found id: ""
	I0729 18:28:29.887292   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.887302   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:29.887311   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:29.887388   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:29.927011   78080 cri.go:89] found id: ""
	I0729 18:28:29.927041   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.927051   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:29.927058   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:29.927122   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:29.965577   78080 cri.go:89] found id: ""
	I0729 18:28:29.965609   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.965619   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:29.965625   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:29.965693   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:29.999180   78080 cri.go:89] found id: ""
	I0729 18:28:29.999210   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.999222   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:29.999233   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:29.999253   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:30.049401   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:30.049433   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:30.063903   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:30.063939   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:30.194776   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:30.194797   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:30.194812   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:30.261861   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:30.261906   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:27.434837   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:29.435257   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:31.435297   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:30.738940   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:32.740748   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:31.542494   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:33.542872   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:32.801821   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:32.814741   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:32.814815   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:32.853490   78080 cri.go:89] found id: ""
	I0729 18:28:32.853514   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.853522   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:32.853530   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:32.853580   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:32.890314   78080 cri.go:89] found id: ""
	I0729 18:28:32.890339   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.890349   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:32.890356   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:32.890435   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:32.928231   78080 cri.go:89] found id: ""
	I0729 18:28:32.928255   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.928262   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:32.928268   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:32.928314   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:32.964024   78080 cri.go:89] found id: ""
	I0729 18:28:32.964054   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.964065   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:32.964072   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:32.964136   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:33.002099   78080 cri.go:89] found id: ""
	I0729 18:28:33.002127   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.002140   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:33.002146   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:33.002195   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:33.042238   78080 cri.go:89] found id: ""
	I0729 18:28:33.042265   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.042273   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:33.042278   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:33.042331   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:33.078715   78080 cri.go:89] found id: ""
	I0729 18:28:33.078741   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.078750   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:33.078756   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:33.078816   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:33.123304   78080 cri.go:89] found id: ""
	I0729 18:28:33.123334   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.123342   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:33.123351   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:33.123366   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:33.198950   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:33.198994   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:33.223566   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:33.223594   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:33.306500   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:33.306526   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:33.306541   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:33.379386   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:33.379421   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:35.926834   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:35.942218   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:35.942296   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:35.980115   78080 cri.go:89] found id: ""
	I0729 18:28:35.980142   78080 logs.go:276] 0 containers: []
	W0729 18:28:35.980153   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:35.980159   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:35.980221   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:36.015354   78080 cri.go:89] found id: ""
	I0729 18:28:36.015379   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.015387   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:36.015392   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:36.015456   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:36.056411   78080 cri.go:89] found id: ""
	I0729 18:28:36.056435   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.056445   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:36.056451   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:36.056499   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:36.099153   78080 cri.go:89] found id: ""
	I0729 18:28:36.099180   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.099188   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:36.099193   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:36.099241   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:36.133427   78080 cri.go:89] found id: ""
	I0729 18:28:36.133459   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.133470   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:36.133477   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:36.133544   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:36.168619   78080 cri.go:89] found id: ""
	I0729 18:28:36.168646   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.168657   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:36.168664   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:36.168723   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:36.203636   78080 cri.go:89] found id: ""
	I0729 18:28:36.203666   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.203676   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:36.203684   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:36.203747   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:36.246495   78080 cri.go:89] found id: ""
	I0729 18:28:36.246523   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.246533   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:36.246544   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:36.246561   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:36.260630   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:36.260656   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:36.337406   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:36.337424   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:36.337435   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:36.410016   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:36.410049   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:36.453458   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:36.453492   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:33.435859   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:35.934955   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:35.240070   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:37.739406   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:39.740035   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:35.543153   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:37.543467   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:39.543573   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:39.004147   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:39.018217   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:39.018279   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:39.054130   78080 cri.go:89] found id: ""
	I0729 18:28:39.054155   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.054166   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:39.054172   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:39.054219   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:39.090458   78080 cri.go:89] found id: ""
	I0729 18:28:39.090482   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.090490   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:39.090501   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:39.090548   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:39.126933   78080 cri.go:89] found id: ""
	I0729 18:28:39.126960   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.126971   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:39.126978   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:39.127042   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:39.162324   78080 cri.go:89] found id: ""
	I0729 18:28:39.162352   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.162381   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:39.162389   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:39.162450   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:39.202440   78080 cri.go:89] found id: ""
	I0729 18:28:39.202464   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.202471   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:39.202477   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:39.202537   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:39.238314   78080 cri.go:89] found id: ""
	I0729 18:28:39.238342   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.238352   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:39.238368   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:39.238436   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:39.275545   78080 cri.go:89] found id: ""
	I0729 18:28:39.275584   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.275592   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:39.275598   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:39.275663   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:39.311575   78080 cri.go:89] found id: ""
	I0729 18:28:39.311603   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.311614   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:39.311624   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:39.311643   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:39.367667   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:39.367711   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:39.381823   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:39.381852   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:39.456060   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:39.456083   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:39.456100   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:39.531747   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:39.531784   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:42.077771   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:42.092424   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:42.092512   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:42.128710   78080 cri.go:89] found id: ""
	I0729 18:28:42.128744   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.128756   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:42.128765   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:42.128834   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:42.166092   78080 cri.go:89] found id: ""
	I0729 18:28:42.166126   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.166133   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:42.166138   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:42.166186   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:42.200955   78080 cri.go:89] found id: ""
	I0729 18:28:42.200981   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.200989   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:42.200994   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:42.201053   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:38.435476   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:40.935166   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:42.240354   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:44.739322   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:41.543640   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:43.543781   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:42.240176   78080 cri.go:89] found id: ""
	I0729 18:28:42.240203   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.240212   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:42.240219   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:42.240279   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:42.279844   78080 cri.go:89] found id: ""
	I0729 18:28:42.279872   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.279880   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:42.279885   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:42.279946   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:42.313071   78080 cri.go:89] found id: ""
	I0729 18:28:42.313099   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.313108   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:42.313114   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:42.313187   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:42.348540   78080 cri.go:89] found id: ""
	I0729 18:28:42.348566   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.348573   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:42.348580   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:42.348630   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:42.384688   78080 cri.go:89] found id: ""
	I0729 18:28:42.384714   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.384725   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:42.384736   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:42.384750   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:42.399178   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:42.399206   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:42.472903   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:42.472921   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:42.472937   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:42.558541   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:42.558573   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:42.599403   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:42.599432   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:45.154026   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:45.167130   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:45.167200   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:45.203627   78080 cri.go:89] found id: ""
	I0729 18:28:45.203654   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.203663   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:45.203668   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:45.203714   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:45.242293   78080 cri.go:89] found id: ""
	I0729 18:28:45.242316   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.242325   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:45.242332   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:45.242403   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:45.282253   78080 cri.go:89] found id: ""
	I0729 18:28:45.282275   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.282282   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:45.282288   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:45.282335   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:45.320151   78080 cri.go:89] found id: ""
	I0729 18:28:45.320175   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.320183   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:45.320189   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:45.320250   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:45.356210   78080 cri.go:89] found id: ""
	I0729 18:28:45.356236   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.356247   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:45.356254   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:45.356316   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:45.393083   78080 cri.go:89] found id: ""
	I0729 18:28:45.393116   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.393131   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:45.393139   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:45.393199   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:45.430235   78080 cri.go:89] found id: ""
	I0729 18:28:45.430263   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.430274   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:45.430282   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:45.430346   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:45.463068   78080 cri.go:89] found id: ""
	I0729 18:28:45.463132   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.463143   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:45.463155   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:45.463203   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:45.541411   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:45.541441   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:45.581967   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:45.582001   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:45.639427   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:45.639459   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:45.655715   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:45.655741   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:45.725820   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:42.943815   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:45.435444   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:46.739873   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:49.240293   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:46.042576   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:48.042735   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:48.226252   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:48.240419   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:48.240494   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:48.271506   78080 cri.go:89] found id: ""
	I0729 18:28:48.271538   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.271550   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:48.271557   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:48.271615   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:48.305163   78080 cri.go:89] found id: ""
	I0729 18:28:48.305186   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.305198   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:48.305203   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:48.305252   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:48.336453   78080 cri.go:89] found id: ""
	I0729 18:28:48.336480   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.336492   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:48.336500   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:48.336557   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:48.368690   78080 cri.go:89] found id: ""
	I0729 18:28:48.368713   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.368720   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:48.368725   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:48.368784   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:48.401723   78080 cri.go:89] found id: ""
	I0729 18:28:48.401746   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.401753   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:48.401758   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:48.401822   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:48.439876   78080 cri.go:89] found id: ""
	I0729 18:28:48.439896   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.439903   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:48.439908   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:48.439956   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:48.473352   78080 cri.go:89] found id: ""
	I0729 18:28:48.473383   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.473394   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:48.473401   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:48.473461   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:48.506752   78080 cri.go:89] found id: ""
	I0729 18:28:48.506779   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.506788   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:48.506799   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:48.506815   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:48.547513   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:48.547535   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:48.599704   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:48.599733   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:48.613577   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:48.613604   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:48.681272   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:48.681290   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:48.681301   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:51.267397   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:51.280243   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:51.280317   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:51.314047   78080 cri.go:89] found id: ""
	I0729 18:28:51.314078   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.314090   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:51.314097   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:51.314162   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:51.346048   78080 cri.go:89] found id: ""
	I0729 18:28:51.346073   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.346080   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:51.346085   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:51.346144   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:51.380511   78080 cri.go:89] found id: ""
	I0729 18:28:51.380543   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.380553   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:51.380561   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:51.380637   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:51.415189   78080 cri.go:89] found id: ""
	I0729 18:28:51.415213   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.415220   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:51.415227   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:51.415310   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:51.454324   78080 cri.go:89] found id: ""
	I0729 18:28:51.454351   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.454380   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:51.454388   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:51.454449   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:51.488737   78080 cri.go:89] found id: ""
	I0729 18:28:51.488768   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.488779   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:51.488787   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:51.488848   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:51.528869   78080 cri.go:89] found id: ""
	I0729 18:28:51.528903   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.528912   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:51.528920   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:51.528972   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:51.566039   78080 cri.go:89] found id: ""
	I0729 18:28:51.566067   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.566075   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:51.566086   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:51.566102   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:51.604746   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:51.604774   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:51.661048   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:51.661089   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:51.675420   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:51.675447   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:51.754496   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:51.754531   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:51.754548   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:47.934575   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:49.935187   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:51.247773   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:53.740386   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:50.043378   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:52.543104   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:54.335796   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:54.350726   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:54.350784   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:54.389661   78080 cri.go:89] found id: ""
	I0729 18:28:54.389683   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.389694   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:54.389701   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:54.389761   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:54.427073   78080 cri.go:89] found id: ""
	I0729 18:28:54.427100   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.427110   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:54.427117   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:54.427178   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:54.466761   78080 cri.go:89] found id: ""
	I0729 18:28:54.466793   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.466802   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:54.466808   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:54.466871   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:54.501115   78080 cri.go:89] found id: ""
	I0729 18:28:54.501144   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.501159   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:54.501167   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:54.501229   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:54.535430   78080 cri.go:89] found id: ""
	I0729 18:28:54.535461   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.535472   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:54.535480   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:54.535543   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:54.574994   78080 cri.go:89] found id: ""
	I0729 18:28:54.575024   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.575034   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:54.575041   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:54.575107   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:54.608770   78080 cri.go:89] found id: ""
	I0729 18:28:54.608792   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.608800   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:54.608805   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:54.608850   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:54.648026   78080 cri.go:89] found id: ""
	I0729 18:28:54.648050   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.648057   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:54.648066   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:54.648077   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:54.728445   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:54.728485   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:54.774752   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:54.774781   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:54.826549   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:54.826582   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:54.840366   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:54.840394   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:54.907422   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:52.434956   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:54.436125   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:56.933929   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:56.239045   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:58.239967   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:55.041898   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:57.042968   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:59.542837   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:57.408469   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:57.421855   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:57.421923   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:57.457794   78080 cri.go:89] found id: ""
	I0729 18:28:57.457816   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.457824   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:57.457829   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:57.457908   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:57.492851   78080 cri.go:89] found id: ""
	I0729 18:28:57.492880   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.492888   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:57.492894   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:57.492946   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:57.528221   78080 cri.go:89] found id: ""
	I0729 18:28:57.528249   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.528258   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:57.528265   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:57.528330   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:57.565504   78080 cri.go:89] found id: ""
	I0729 18:28:57.565536   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.565547   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:57.565554   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:57.565618   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:57.599391   78080 cri.go:89] found id: ""
	I0729 18:28:57.599418   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.599426   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:57.599432   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:57.599491   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:57.643757   78080 cri.go:89] found id: ""
	I0729 18:28:57.643784   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.643798   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:57.643806   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:57.643867   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:57.680825   78080 cri.go:89] found id: ""
	I0729 18:28:57.680853   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.680864   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:57.680871   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:57.680936   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:57.714450   78080 cri.go:89] found id: ""
	I0729 18:28:57.714479   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.714490   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:57.714500   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:57.714516   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:57.798411   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:57.798437   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:57.798453   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:57.878210   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:57.878246   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:57.917476   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:57.917505   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:57.971395   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:57.971432   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:00.486419   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:00.500625   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:00.500703   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:00.539625   78080 cri.go:89] found id: ""
	I0729 18:29:00.539650   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.539659   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:00.539682   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:00.539737   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:00.577252   78080 cri.go:89] found id: ""
	I0729 18:29:00.577284   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.577297   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:00.577303   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:00.577350   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:00.611850   78080 cri.go:89] found id: ""
	I0729 18:29:00.611878   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.611886   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:00.611892   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:00.611939   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:00.648964   78080 cri.go:89] found id: ""
	I0729 18:29:00.648989   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.648996   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:00.649003   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:00.649062   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:00.686124   78080 cri.go:89] found id: ""
	I0729 18:29:00.686147   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.686156   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:00.686161   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:00.686217   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:00.721166   78080 cri.go:89] found id: ""
	I0729 18:29:00.721195   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.721205   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:00.721213   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:00.721276   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:00.758394   78080 cri.go:89] found id: ""
	I0729 18:29:00.758423   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.758431   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:00.758436   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:00.758491   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:00.793487   78080 cri.go:89] found id: ""
	I0729 18:29:00.793514   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.793523   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:00.793533   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:00.793549   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:00.807069   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:00.807106   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:00.880611   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:00.880629   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:00.880641   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:00.963534   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:00.963568   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:01.004145   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:01.004174   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:58.933964   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:00.934221   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:00.739676   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:02.741020   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:02.042346   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:04.541902   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:03.560985   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:03.574407   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:03.574476   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:03.608027   78080 cri.go:89] found id: ""
	I0729 18:29:03.608048   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.608057   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:03.608062   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:03.608119   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:03.644777   78080 cri.go:89] found id: ""
	I0729 18:29:03.644804   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.644814   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:03.644821   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:03.644895   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:03.684050   78080 cri.go:89] found id: ""
	I0729 18:29:03.684074   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.684082   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:03.684089   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:03.684149   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:03.724350   78080 cri.go:89] found id: ""
	I0729 18:29:03.724376   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.724383   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:03.724390   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:03.724439   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:03.766859   78080 cri.go:89] found id: ""
	I0729 18:29:03.766887   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.766898   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:03.766905   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:03.766967   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:03.800535   78080 cri.go:89] found id: ""
	I0729 18:29:03.800562   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.800572   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:03.800579   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:03.800639   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:03.834991   78080 cri.go:89] found id: ""
	I0729 18:29:03.835011   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.835019   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:03.835024   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:03.835073   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:03.869159   78080 cri.go:89] found id: ""
	I0729 18:29:03.869191   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.869201   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:03.869211   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:03.869226   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:03.940451   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:03.940469   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:03.940487   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:04.020880   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:04.020910   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:04.064707   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:04.064728   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:04.121551   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:04.121587   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:06.636983   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:06.651500   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:06.651582   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:06.686556   78080 cri.go:89] found id: ""
	I0729 18:29:06.686582   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.686592   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:06.686599   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:06.686660   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:06.721967   78080 cri.go:89] found id: ""
	I0729 18:29:06.721996   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.722008   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:06.722016   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:06.722115   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:06.760409   78080 cri.go:89] found id: ""
	I0729 18:29:06.760433   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.760440   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:06.760445   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:06.760499   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:06.794050   78080 cri.go:89] found id: ""
	I0729 18:29:06.794074   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.794081   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:06.794087   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:06.794143   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:06.826445   78080 cri.go:89] found id: ""
	I0729 18:29:06.826471   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.826478   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:06.826484   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:06.826544   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:06.860680   78080 cri.go:89] found id: ""
	I0729 18:29:06.860700   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.860706   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:06.860712   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:06.860761   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:06.898192   78080 cri.go:89] found id: ""
	I0729 18:29:06.898215   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.898223   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:06.898229   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:06.898284   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:06.931892   78080 cri.go:89] found id: ""
	I0729 18:29:06.931920   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.931930   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:06.931940   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:06.931955   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:06.987265   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:06.987294   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:07.043520   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:07.043547   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:07.056995   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:07.057019   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:07.124932   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:07.124956   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:07.124971   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:03.435778   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:05.936004   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:05.239352   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:07.239383   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:06.542526   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:08.543497   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:09.708947   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:09.723497   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:09.723565   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:09.762686   78080 cri.go:89] found id: ""
	I0729 18:29:09.762714   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.762725   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:09.762733   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:09.762797   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:09.799674   78080 cri.go:89] found id: ""
	I0729 18:29:09.799699   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.799708   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:09.799715   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:09.799775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:09.836121   78080 cri.go:89] found id: ""
	I0729 18:29:09.836147   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.836156   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:09.836161   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:09.836209   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:09.872758   78080 cri.go:89] found id: ""
	I0729 18:29:09.872783   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.872791   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:09.872797   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:09.872842   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:09.911681   78080 cri.go:89] found id: ""
	I0729 18:29:09.911711   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.911719   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:09.911724   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:09.911773   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:09.951531   78080 cri.go:89] found id: ""
	I0729 18:29:09.951554   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.951561   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:09.951567   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:09.951624   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:09.985568   78080 cri.go:89] found id: ""
	I0729 18:29:09.985597   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.985606   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:09.985612   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:09.985661   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:10.020369   78080 cri.go:89] found id: ""
	I0729 18:29:10.020394   78080 logs.go:276] 0 containers: []
	W0729 18:29:10.020402   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:10.020409   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:10.020421   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:10.076538   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:10.076574   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:10.090954   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:10.090980   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:10.165843   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:10.165875   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:10.165890   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:10.242438   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:10.242469   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:08.434575   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:10.934523   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:09.744446   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:12.239540   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:14.242060   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:10.544272   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:13.043064   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:12.781369   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:12.797066   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:12.797160   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:12.832500   78080 cri.go:89] found id: ""
	I0729 18:29:12.832528   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.832545   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:12.832552   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:12.832615   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:12.866390   78080 cri.go:89] found id: ""
	I0729 18:29:12.866420   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.866428   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:12.866434   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:12.866494   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:12.901616   78080 cri.go:89] found id: ""
	I0729 18:29:12.901636   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.901644   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:12.901649   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:12.901713   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:12.935954   78080 cri.go:89] found id: ""
	I0729 18:29:12.935976   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.935985   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:12.935993   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:12.936053   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:12.970570   78080 cri.go:89] found id: ""
	I0729 18:29:12.970623   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.970637   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:12.970645   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:12.970702   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:13.008629   78080 cri.go:89] found id: ""
	I0729 18:29:13.008658   78080 logs.go:276] 0 containers: []
	W0729 18:29:13.008666   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:13.008672   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:13.008725   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:13.045689   78080 cri.go:89] found id: ""
	I0729 18:29:13.045713   78080 logs.go:276] 0 containers: []
	W0729 18:29:13.045721   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:13.045726   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:13.045773   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:13.084707   78080 cri.go:89] found id: ""
	I0729 18:29:13.084735   78080 logs.go:276] 0 containers: []
	W0729 18:29:13.084745   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:13.084756   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:13.084774   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:13.161884   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:13.161920   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:13.205377   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:13.205410   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:13.258161   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:13.258189   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:13.272208   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:13.272240   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:13.347519   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:15.848068   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:15.861773   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:15.861851   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:15.902421   78080 cri.go:89] found id: ""
	I0729 18:29:15.902449   78080 logs.go:276] 0 containers: []
	W0729 18:29:15.902458   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:15.902466   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:15.902532   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:15.939552   78080 cri.go:89] found id: ""
	I0729 18:29:15.939576   78080 logs.go:276] 0 containers: []
	W0729 18:29:15.939583   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:15.939588   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:15.939645   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:15.974424   78080 cri.go:89] found id: ""
	I0729 18:29:15.974454   78080 logs.go:276] 0 containers: []
	W0729 18:29:15.974463   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:15.974468   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:15.974516   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:16.010955   78080 cri.go:89] found id: ""
	I0729 18:29:16.010993   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.011000   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:16.011006   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:16.011062   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:16.046785   78080 cri.go:89] found id: ""
	I0729 18:29:16.046815   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.046825   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:16.046832   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:16.046887   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:16.082691   78080 cri.go:89] found id: ""
	I0729 18:29:16.082721   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.082731   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:16.082739   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:16.082796   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:16.127633   78080 cri.go:89] found id: ""
	I0729 18:29:16.127663   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.127676   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:16.127684   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:16.127741   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:16.162641   78080 cri.go:89] found id: ""
	I0729 18:29:16.162662   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.162670   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:16.162684   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:16.162695   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:16.215132   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:16.215162   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:16.229581   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:16.229607   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:16.303178   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:16.303198   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:16.303212   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:16.383739   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:16.383775   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:12.934751   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:14.934965   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:16.739047   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:18.739145   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:15.043163   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:17.544340   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:18.924292   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:18.937571   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:18.937626   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:18.970523   78080 cri.go:89] found id: ""
	I0729 18:29:18.970554   78080 logs.go:276] 0 containers: []
	W0729 18:29:18.970563   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:18.970568   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:18.970624   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:19.005448   78080 cri.go:89] found id: ""
	I0729 18:29:19.005471   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.005478   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:19.005483   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:19.005538   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:19.044352   78080 cri.go:89] found id: ""
	I0729 18:29:19.044377   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.044386   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:19.044393   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:19.044448   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:19.079288   78080 cri.go:89] found id: ""
	I0729 18:29:19.079317   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.079327   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:19.079333   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:19.079402   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:19.122932   78080 cri.go:89] found id: ""
	I0729 18:29:19.122954   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.122961   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:19.122967   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:19.123020   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:19.166992   78080 cri.go:89] found id: ""
	I0729 18:29:19.167018   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.167025   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:19.167031   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:19.167103   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:19.215301   78080 cri.go:89] found id: ""
	I0729 18:29:19.215331   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.215341   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:19.215355   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:19.215419   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:19.267635   78080 cri.go:89] found id: ""
	I0729 18:29:19.267657   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.267664   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:19.267671   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:19.267682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:19.319924   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:19.319962   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:19.333987   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:19.334010   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:19.406541   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:19.406558   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:19.406571   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:19.487388   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:19.487426   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:22.027745   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:22.041145   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:22.041218   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:22.080000   78080 cri.go:89] found id: ""
	I0729 18:29:22.080022   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.080029   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:22.080034   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:22.080079   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:22.116385   78080 cri.go:89] found id: ""
	I0729 18:29:22.116415   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.116425   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:22.116431   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:22.116492   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:22.150530   78080 cri.go:89] found id: ""
	I0729 18:29:22.150552   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.150559   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:22.150565   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:22.150621   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:22.188782   78080 cri.go:89] found id: ""
	I0729 18:29:22.188808   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.188817   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:22.188822   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:22.188873   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:17.434007   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:19.434864   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:21.935573   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:20.739852   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:23.239853   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:20.044010   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:22.542952   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:24.543614   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:22.227117   78080 cri.go:89] found id: ""
	I0729 18:29:22.227152   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.227162   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:22.227169   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:22.227234   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:22.263057   78080 cri.go:89] found id: ""
	I0729 18:29:22.263079   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.263086   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:22.263091   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:22.263145   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:22.297368   78080 cri.go:89] found id: ""
	I0729 18:29:22.297391   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.297399   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:22.297406   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:22.297466   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:22.334117   78080 cri.go:89] found id: ""
	I0729 18:29:22.334149   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.334159   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:22.334170   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:22.334184   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:22.349344   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:22.349369   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:22.415720   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:22.415743   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:22.415758   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:22.494937   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:22.494971   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:22.536352   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:22.536382   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:25.087795   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:25.103985   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:25.104050   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:25.158532   78080 cri.go:89] found id: ""
	I0729 18:29:25.158562   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.158572   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:25.158580   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:25.158641   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:25.216740   78080 cri.go:89] found id: ""
	I0729 18:29:25.216762   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.216769   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:25.216775   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:25.216827   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:25.254827   78080 cri.go:89] found id: ""
	I0729 18:29:25.254855   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.254865   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:25.254872   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:25.254934   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:25.289377   78080 cri.go:89] found id: ""
	I0729 18:29:25.289407   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.289417   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:25.289424   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:25.289484   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:25.328111   78080 cri.go:89] found id: ""
	I0729 18:29:25.328144   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.328153   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:25.328161   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:25.328224   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:25.364779   78080 cri.go:89] found id: ""
	I0729 18:29:25.364808   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.364815   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:25.364827   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:25.364874   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:25.402906   78080 cri.go:89] found id: ""
	I0729 18:29:25.402935   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.402942   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:25.402948   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:25.403007   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:25.438747   78080 cri.go:89] found id: ""
	I0729 18:29:25.438770   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.438778   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:25.438787   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:25.438803   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:25.452803   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:25.452829   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:25.527575   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:25.527593   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:25.527610   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:25.622437   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:25.622482   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:25.661451   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:25.661478   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:23.936249   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:26.434496   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:25.739358   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:27.739702   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:27.043125   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:29.542130   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:28.213898   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:28.230013   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:28.230071   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:28.265484   78080 cri.go:89] found id: ""
	I0729 18:29:28.265511   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.265521   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:28.265530   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:28.265594   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:28.306374   78080 cri.go:89] found id: ""
	I0729 18:29:28.306428   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.306441   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:28.306448   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:28.306501   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:28.340274   78080 cri.go:89] found id: ""
	I0729 18:29:28.340299   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.340309   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:28.340316   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:28.340379   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:28.373928   78080 cri.go:89] found id: ""
	I0729 18:29:28.373973   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.373982   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:28.373990   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:28.374052   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:28.407075   78080 cri.go:89] found id: ""
	I0729 18:29:28.407107   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.407120   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:28.407129   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:28.407215   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:28.444501   78080 cri.go:89] found id: ""
	I0729 18:29:28.444528   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.444536   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:28.444543   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:28.444614   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:28.487513   78080 cri.go:89] found id: ""
	I0729 18:29:28.487540   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.487548   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:28.487554   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:28.487611   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:28.521957   78080 cri.go:89] found id: ""
	I0729 18:29:28.521990   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.522000   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:28.522011   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:28.522027   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:28.536880   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:28.536918   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:28.609486   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:28.609513   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:28.609528   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:28.694086   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:28.694125   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:28.733930   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:28.733964   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:31.292260   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:31.305840   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:31.305899   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:31.342510   78080 cri.go:89] found id: ""
	I0729 18:29:31.342539   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.342550   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:31.342557   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:31.342613   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:31.375093   78080 cri.go:89] found id: ""
	I0729 18:29:31.375118   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.375128   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:31.375135   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:31.375198   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:31.408554   78080 cri.go:89] found id: ""
	I0729 18:29:31.408576   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.408583   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:31.408588   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:31.408660   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:31.448748   78080 cri.go:89] found id: ""
	I0729 18:29:31.448774   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.448783   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:31.448796   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:31.448855   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:31.483541   78080 cri.go:89] found id: ""
	I0729 18:29:31.483564   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.483572   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:31.483578   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:31.483637   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:31.518173   78080 cri.go:89] found id: ""
	I0729 18:29:31.518198   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.518209   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:31.518217   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:31.518279   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:31.553345   78080 cri.go:89] found id: ""
	I0729 18:29:31.553371   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.553379   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:31.553384   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:31.553439   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:31.591857   78080 cri.go:89] found id: ""
	I0729 18:29:31.591887   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.591905   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:31.591916   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:31.591929   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:31.648404   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:31.648436   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:31.661455   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:31.661477   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:31.732978   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:31.732997   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:31.733009   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:31.812105   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:31.812145   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:28.435517   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:30.436822   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:30.239755   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:32.739231   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:34.739534   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:31.542847   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:33.543096   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:34.353079   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:34.366759   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:34.366817   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:34.400944   78080 cri.go:89] found id: ""
	I0729 18:29:34.400974   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.400984   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:34.400991   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:34.401055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:34.439348   78080 cri.go:89] found id: ""
	I0729 18:29:34.439373   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.439383   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:34.439395   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:34.439444   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:34.473969   78080 cri.go:89] found id: ""
	I0729 18:29:34.473991   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.474010   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:34.474017   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:34.474080   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:34.507741   78080 cri.go:89] found id: ""
	I0729 18:29:34.507770   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.507778   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:34.507784   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:34.507845   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:34.543794   78080 cri.go:89] found id: ""
	I0729 18:29:34.543815   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.543823   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:34.543830   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:34.543895   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:34.577893   78080 cri.go:89] found id: ""
	I0729 18:29:34.577918   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.577926   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:34.577931   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:34.577978   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:34.612703   78080 cri.go:89] found id: ""
	I0729 18:29:34.612735   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.612745   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:34.612752   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:34.612815   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:34.648167   78080 cri.go:89] found id: ""
	I0729 18:29:34.648197   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.648209   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:34.648219   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:34.648233   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:34.689821   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:34.689848   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:34.743902   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:34.743935   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:34.757400   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:34.757426   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:34.833684   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:34.833706   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:34.833721   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:32.934207   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:34.936549   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:37.238618   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:39.239761   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:36.042461   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:38.543304   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:37.419270   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:37.433249   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:37.433301   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:37.469991   78080 cri.go:89] found id: ""
	I0729 18:29:37.470021   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.470031   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:37.470038   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:37.470098   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:37.504511   78080 cri.go:89] found id: ""
	I0729 18:29:37.504537   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.504548   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:37.504554   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:37.504612   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:37.545304   78080 cri.go:89] found id: ""
	I0729 18:29:37.545332   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.545342   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:37.545349   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:37.545406   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:37.584255   78080 cri.go:89] found id: ""
	I0729 18:29:37.584280   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.584287   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:37.584292   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:37.584345   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:37.620917   78080 cri.go:89] found id: ""
	I0729 18:29:37.620943   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.620951   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:37.620958   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:37.621022   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:37.659381   78080 cri.go:89] found id: ""
	I0729 18:29:37.659405   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.659414   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:37.659419   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:37.659486   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:37.701337   78080 cri.go:89] found id: ""
	I0729 18:29:37.701360   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.701368   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:37.701373   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:37.701426   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:37.737142   78080 cri.go:89] found id: ""
	I0729 18:29:37.737168   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.737177   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:37.737186   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:37.737201   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:37.789951   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:37.789992   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:37.804759   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:37.804784   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:37.881777   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:37.881794   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:37.881808   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:37.970593   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:37.970625   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:40.511557   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:40.525472   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:40.525527   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:40.564227   78080 cri.go:89] found id: ""
	I0729 18:29:40.564253   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.564263   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:40.564270   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:40.564336   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:40.600384   78080 cri.go:89] found id: ""
	I0729 18:29:40.600409   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.600417   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:40.600423   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:40.600475   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:40.634819   78080 cri.go:89] found id: ""
	I0729 18:29:40.634843   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.634858   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:40.634866   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:40.634913   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:40.669963   78080 cri.go:89] found id: ""
	I0729 18:29:40.669991   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.669999   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:40.670006   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:40.670069   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:40.705680   78080 cri.go:89] found id: ""
	I0729 18:29:40.705705   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.705714   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:40.705719   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:40.705775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:40.743691   78080 cri.go:89] found id: ""
	I0729 18:29:40.743715   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.743725   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:40.743732   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:40.743820   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:40.783858   78080 cri.go:89] found id: ""
	I0729 18:29:40.783889   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.783898   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:40.783903   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:40.783953   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:40.821499   78080 cri.go:89] found id: ""
	I0729 18:29:40.821527   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.821537   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:40.821547   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:40.821562   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:40.874941   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:40.874972   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:40.888034   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:40.888057   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:40.960013   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:40.960032   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:40.960044   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:41.043013   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:41.043042   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:37.435119   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:39.435967   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:41.934232   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:41.739070   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:43.739497   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:40.543453   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:43.042528   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:43.583555   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:43.597120   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:43.597193   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:43.631500   78080 cri.go:89] found id: ""
	I0729 18:29:43.631526   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.631535   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:43.631542   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:43.631607   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:43.667003   78080 cri.go:89] found id: ""
	I0729 18:29:43.667029   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.667037   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:43.667042   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:43.667102   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:43.701471   78080 cri.go:89] found id: ""
	I0729 18:29:43.701502   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.701510   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:43.701515   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:43.701569   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:43.740037   78080 cri.go:89] found id: ""
	I0729 18:29:43.740058   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.740067   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:43.740074   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:43.740145   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:43.772584   78080 cri.go:89] found id: ""
	I0729 18:29:43.772610   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.772620   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:43.772626   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:43.772689   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:43.806340   78080 cri.go:89] found id: ""
	I0729 18:29:43.806382   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.806393   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:43.806401   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:43.806480   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:43.840085   78080 cri.go:89] found id: ""
	I0729 18:29:43.840109   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.840118   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:43.840133   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:43.840198   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:43.873412   78080 cri.go:89] found id: ""
	I0729 18:29:43.873438   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.873448   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:43.873458   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:43.873473   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:43.928762   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:43.928790   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:43.944129   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:43.944156   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:44.017330   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:44.017349   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:44.017361   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:44.106858   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:44.106915   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:46.651050   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:46.665253   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:46.665310   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:46.698846   78080 cri.go:89] found id: ""
	I0729 18:29:46.698871   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.698881   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:46.698888   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:46.698956   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:46.734354   78080 cri.go:89] found id: ""
	I0729 18:29:46.734395   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.734405   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:46.734413   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:46.734468   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:46.771978   78080 cri.go:89] found id: ""
	I0729 18:29:46.771999   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.772007   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:46.772012   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:46.772059   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:46.807231   78080 cri.go:89] found id: ""
	I0729 18:29:46.807255   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.807263   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:46.807272   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:46.807329   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:46.842257   78080 cri.go:89] found id: ""
	I0729 18:29:46.842278   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.842306   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:46.842312   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:46.842373   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:46.876287   78080 cri.go:89] found id: ""
	I0729 18:29:46.876309   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.876317   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:46.876323   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:46.876389   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:46.909695   78080 cri.go:89] found id: ""
	I0729 18:29:46.909719   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.909726   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:46.909731   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:46.909806   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:46.951768   78080 cri.go:89] found id: ""
	I0729 18:29:46.951798   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.951807   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:46.951815   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:46.951825   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:47.025467   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:47.025485   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:47.025497   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:47.106336   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:47.106391   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:47.145652   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:47.145682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:47.200857   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:47.200886   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:43.935210   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:46.434346   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:45.739606   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:48.240282   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:45.544442   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:48.042872   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:49.715401   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:49.729703   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:49.729776   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:49.770016   78080 cri.go:89] found id: ""
	I0729 18:29:49.770039   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.770062   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:49.770070   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:49.770127   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:49.805464   78080 cri.go:89] found id: ""
	I0729 18:29:49.805487   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.805495   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:49.805500   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:49.805560   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:49.838739   78080 cri.go:89] found id: ""
	I0729 18:29:49.838770   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.838782   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:49.838789   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:49.838861   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:49.881168   78080 cri.go:89] found id: ""
	I0729 18:29:49.881194   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.881202   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:49.881208   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:49.881269   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:49.919978   78080 cri.go:89] found id: ""
	I0729 18:29:49.919999   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.920006   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:49.920012   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:49.920079   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:49.958971   78080 cri.go:89] found id: ""
	I0729 18:29:49.958996   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.959006   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:49.959013   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:49.959063   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:50.001253   78080 cri.go:89] found id: ""
	I0729 18:29:50.001281   78080 logs.go:276] 0 containers: []
	W0729 18:29:50.001291   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:50.001298   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:50.001362   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:50.038729   78080 cri.go:89] found id: ""
	I0729 18:29:50.038755   78080 logs.go:276] 0 containers: []
	W0729 18:29:50.038766   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:50.038776   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:50.038789   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:50.082540   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:50.082567   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:50.132372   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:50.132413   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:50.146806   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:50.146835   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:50.214495   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:50.214515   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:50.214532   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:48.435540   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:50.935475   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:50.240626   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:52.739158   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:50.044073   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:52.047924   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:54.542657   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:52.793987   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:52.808085   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:52.808149   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:52.844869   78080 cri.go:89] found id: ""
	I0729 18:29:52.844904   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.844917   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:52.844925   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:52.844986   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:52.878097   78080 cri.go:89] found id: ""
	I0729 18:29:52.878122   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.878135   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:52.878142   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:52.878191   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:52.910843   78080 cri.go:89] found id: ""
	I0729 18:29:52.910884   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.910894   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:52.910902   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:52.910953   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:52.943233   78080 cri.go:89] found id: ""
	I0729 18:29:52.943257   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.943267   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:52.943274   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:52.943335   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:52.978354   78080 cri.go:89] found id: ""
	I0729 18:29:52.978402   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.978413   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:52.978423   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:52.978503   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:53.011238   78080 cri.go:89] found id: ""
	I0729 18:29:53.011266   78080 logs.go:276] 0 containers: []
	W0729 18:29:53.011276   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:53.011283   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:53.011336   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:53.048787   78080 cri.go:89] found id: ""
	I0729 18:29:53.048817   78080 logs.go:276] 0 containers: []
	W0729 18:29:53.048827   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:53.048834   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:53.048900   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:53.086108   78080 cri.go:89] found id: ""
	I0729 18:29:53.086135   78080 logs.go:276] 0 containers: []
	W0729 18:29:53.086156   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:53.086176   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:53.086195   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:53.137552   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:53.137580   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:53.151308   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:53.151333   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:53.225968   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:53.225992   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:53.226004   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:53.308111   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:53.308145   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:55.850207   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:55.864003   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:55.864054   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:55.898109   78080 cri.go:89] found id: ""
	I0729 18:29:55.898134   78080 logs.go:276] 0 containers: []
	W0729 18:29:55.898142   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:55.898148   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:55.898201   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:55.931616   78080 cri.go:89] found id: ""
	I0729 18:29:55.931643   78080 logs.go:276] 0 containers: []
	W0729 18:29:55.931653   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:55.931660   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:55.931719   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:55.969034   78080 cri.go:89] found id: ""
	I0729 18:29:55.969063   78080 logs.go:276] 0 containers: []
	W0729 18:29:55.969073   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:55.969080   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:55.969142   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:56.007552   78080 cri.go:89] found id: ""
	I0729 18:29:56.007576   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.007586   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:56.007592   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:56.007653   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:56.044342   78080 cri.go:89] found id: ""
	I0729 18:29:56.044367   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.044376   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:56.044382   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:56.044437   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:56.078352   78080 cri.go:89] found id: ""
	I0729 18:29:56.078396   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.078412   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:56.078420   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:56.078471   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:56.116505   78080 cri.go:89] found id: ""
	I0729 18:29:56.116532   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.116543   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:56.116551   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:56.116611   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:56.151493   78080 cri.go:89] found id: ""
	I0729 18:29:56.151516   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.151523   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:56.151530   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:56.151542   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:56.206170   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:56.206198   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:56.219658   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:56.219684   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:56.290279   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:56.290300   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:56.290312   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:56.371352   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:56.371382   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:53.434046   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:55.435343   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:55.239055   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:57.241032   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:59.740003   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:57.041745   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:59.042416   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:58.908793   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:58.922566   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:58.922626   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:58.959375   78080 cri.go:89] found id: ""
	I0729 18:29:58.959397   78080 logs.go:276] 0 containers: []
	W0729 18:29:58.959404   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:58.959410   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:58.959459   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:58.993235   78080 cri.go:89] found id: ""
	I0729 18:29:58.993257   78080 logs.go:276] 0 containers: []
	W0729 18:29:58.993265   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:58.993271   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:58.993331   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:59.028186   78080 cri.go:89] found id: ""
	I0729 18:29:59.028212   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.028220   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:59.028225   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:59.028271   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:59.063589   78080 cri.go:89] found id: ""
	I0729 18:29:59.063619   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.063628   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:59.063635   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:59.063695   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:59.101116   78080 cri.go:89] found id: ""
	I0729 18:29:59.101142   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.101152   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:59.101158   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:59.101208   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:59.135288   78080 cri.go:89] found id: ""
	I0729 18:29:59.135314   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.135324   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:59.135332   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:59.135395   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:59.170520   78080 cri.go:89] found id: ""
	I0729 18:29:59.170549   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.170557   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:59.170562   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:59.170618   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:59.229796   78080 cri.go:89] found id: ""
	I0729 18:29:59.229825   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.229835   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:59.229843   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:59.229871   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:59.244654   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:59.244682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:59.321262   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:59.321286   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:59.321301   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:59.401423   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:59.401459   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:59.442916   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:59.442938   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
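[Editor's note] The gathering loop from pid 78080 repeats below roughly every three seconds: every "crictl ps -a --quiet --name=..." query returns an empty id (0 containers) and "kubectl describe nodes" is refused on localhost:8443, so no control-plane containers ever started on this v1.20.0 (old-k8s-version) node and minikube keeps falling back to the kubelet, dmesg, CRI-O and container-status logs. A hedged sketch of running the same checks by hand over minikube ssh — the profile name is a placeholder, not something recorded in this report:

	# Hypothetical reproduction of the checks pid 78080 loops over; <profile> is a placeholder.
	minikube ssh -p <profile> -- sudo crictl ps -a --name=kube-apiserver        # empty here
	minikube ssh -p <profile> -- sudo journalctl -u kubelet -n 400              # why pods never start
	minikube ssh -p <profile> -- sudo journalctl -u crio -n 400                 # CRI-O runtime logs
	minikube ssh -p <profile> -- sudo /var/lib/minikube/binaries/v1.20.0/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig describe nodes                  # refused on :8443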
	I0729 18:30:01.995116   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:02.008454   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:02.008516   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:02.046412   78080 cri.go:89] found id: ""
	I0729 18:30:02.046431   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.046438   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:02.046443   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:02.046487   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:02.082444   78080 cri.go:89] found id: ""
	I0729 18:30:02.082466   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.082476   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:02.082482   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:02.082551   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:02.116013   78080 cri.go:89] found id: ""
	I0729 18:30:02.116041   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.116052   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:02.116058   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:02.116127   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:02.155817   78080 cri.go:89] found id: ""
	I0729 18:30:02.155844   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.155854   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:02.155862   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:02.155914   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:02.195518   78080 cri.go:89] found id: ""
	I0729 18:30:02.195548   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.195556   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:02.195563   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:02.195624   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:57.934058   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:59.934547   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:01.935238   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:01.742050   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:04.239758   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:01.043550   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:03.542544   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:02.228248   78080 cri.go:89] found id: ""
	I0729 18:30:02.228274   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.228283   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:02.228289   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:02.228370   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:02.262441   78080 cri.go:89] found id: ""
	I0729 18:30:02.262469   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.262479   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:02.262486   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:02.262546   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:02.296900   78080 cri.go:89] found id: ""
	I0729 18:30:02.296930   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.296937   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:02.296953   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:02.296965   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:02.352356   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:02.352389   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:02.366336   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:02.366365   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:02.441367   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:02.441389   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:02.441403   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:02.524134   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:02.524173   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:05.071581   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:05.085481   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:05.085535   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:05.121610   78080 cri.go:89] found id: ""
	I0729 18:30:05.121636   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.121644   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:05.121652   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:05.121716   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:05.157382   78080 cri.go:89] found id: ""
	I0729 18:30:05.157406   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.157413   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:05.157418   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:05.157478   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:05.195552   78080 cri.go:89] found id: ""
	I0729 18:30:05.195582   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.195593   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:05.195600   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:05.195657   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:05.231071   78080 cri.go:89] found id: ""
	I0729 18:30:05.231095   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.231103   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:05.231108   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:05.231165   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:05.267445   78080 cri.go:89] found id: ""
	I0729 18:30:05.267474   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.267485   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:05.267493   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:05.267555   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:05.304258   78080 cri.go:89] found id: ""
	I0729 18:30:05.304279   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.304286   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:05.304291   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:05.304338   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:05.339155   78080 cri.go:89] found id: ""
	I0729 18:30:05.339176   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.339184   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:05.339190   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:05.339243   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:05.375291   78080 cri.go:89] found id: ""
	I0729 18:30:05.375328   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.375337   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:05.375346   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:05.375361   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:05.446196   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:05.446221   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:05.446236   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:05.529421   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:05.529457   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:05.570234   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:05.570269   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:05.629349   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:05.629391   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:04.434625   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:06.934246   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:06.239886   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:08.242421   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:05.543394   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:08.042242   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:08.151320   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:08.165983   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:08.166045   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:08.205703   78080 cri.go:89] found id: ""
	I0729 18:30:08.205726   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.205733   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:08.205738   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:08.205786   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:08.245919   78080 cri.go:89] found id: ""
	I0729 18:30:08.245946   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.245957   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:08.245964   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:08.246024   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:08.286595   78080 cri.go:89] found id: ""
	I0729 18:30:08.286621   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.286631   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:08.286638   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:08.286700   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:08.330032   78080 cri.go:89] found id: ""
	I0729 18:30:08.330060   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.330070   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:08.330077   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:08.330140   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:08.362535   78080 cri.go:89] found id: ""
	I0729 18:30:08.362567   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.362578   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:08.362586   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:08.362645   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:08.397648   78080 cri.go:89] found id: ""
	I0729 18:30:08.397678   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.397688   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:08.397704   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:08.397766   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:08.433615   78080 cri.go:89] found id: ""
	I0729 18:30:08.433693   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.433716   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:08.433734   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:08.433809   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:08.465765   78080 cri.go:89] found id: ""
	I0729 18:30:08.465792   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.465803   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:08.465814   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:08.465829   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:08.536332   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:08.536360   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:08.536375   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:08.613737   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:08.613776   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:08.659707   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:08.659736   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:08.712702   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:08.712736   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:11.226660   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:11.240852   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:11.240919   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:11.277632   78080 cri.go:89] found id: ""
	I0729 18:30:11.277664   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.277675   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:11.277682   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:11.277751   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:11.312458   78080 cri.go:89] found id: ""
	I0729 18:30:11.312478   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.312485   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:11.312491   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:11.312551   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:11.350375   78080 cri.go:89] found id: ""
	I0729 18:30:11.350406   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.350416   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:11.350424   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:11.350486   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:11.389280   78080 cri.go:89] found id: ""
	I0729 18:30:11.389307   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.389317   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:11.389324   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:11.389382   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:11.424907   78080 cri.go:89] found id: ""
	I0729 18:30:11.424936   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.424944   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:11.424949   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:11.425009   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:11.480686   78080 cri.go:89] found id: ""
	I0729 18:30:11.480713   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.480720   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:11.480726   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:11.480778   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:11.514831   78080 cri.go:89] found id: ""
	I0729 18:30:11.514857   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.514864   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:11.514870   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:11.514917   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:11.547930   78080 cri.go:89] found id: ""
	I0729 18:30:11.547955   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.547964   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:11.547974   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:11.547989   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:11.586068   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:11.586098   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:11.646857   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:11.646892   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:11.663549   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:11.663576   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:11.731362   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:11.731383   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:11.731397   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:08.934638   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:11.434765   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:10.738608   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:12.740637   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:10.042514   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:12.042731   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:14.042952   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:14.315531   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:14.330485   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:14.330544   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:14.363403   78080 cri.go:89] found id: ""
	I0729 18:30:14.363433   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.363444   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:14.363451   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:14.363516   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:14.401204   78080 cri.go:89] found id: ""
	I0729 18:30:14.401227   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.401234   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:14.401240   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:14.401301   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:14.436737   78080 cri.go:89] found id: ""
	I0729 18:30:14.436765   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.436775   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:14.436782   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:14.436844   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:14.471376   78080 cri.go:89] found id: ""
	I0729 18:30:14.471403   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.471411   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:14.471419   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:14.471478   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:14.506883   78080 cri.go:89] found id: ""
	I0729 18:30:14.506914   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.506925   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:14.506932   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:14.506990   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:14.546444   78080 cri.go:89] found id: ""
	I0729 18:30:14.546469   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.546479   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:14.546486   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:14.546552   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:14.580282   78080 cri.go:89] found id: ""
	I0729 18:30:14.580313   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.580320   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:14.580326   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:14.580387   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:14.614185   78080 cri.go:89] found id: ""
	I0729 18:30:14.614210   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.614220   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:14.614231   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:14.614246   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:14.652588   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:14.652610   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:14.706056   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:14.706090   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:14.719332   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:14.719356   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:14.792087   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:14.792115   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:14.792136   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:13.934967   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:16.435238   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:14.740676   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:17.239466   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:19.239656   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:16.541564   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:18.547053   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:17.375639   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:17.389473   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:17.389535   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:17.424485   78080 cri.go:89] found id: ""
	I0729 18:30:17.424513   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.424521   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:17.424527   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:17.424572   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:17.461100   78080 cri.go:89] found id: ""
	I0729 18:30:17.461129   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.461136   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:17.461141   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:17.461191   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:17.494866   78080 cri.go:89] found id: ""
	I0729 18:30:17.494894   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.494902   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:17.494907   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:17.494983   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:17.529897   78080 cri.go:89] found id: ""
	I0729 18:30:17.529924   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.529934   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:17.529940   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:17.530002   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:17.569870   78080 cri.go:89] found id: ""
	I0729 18:30:17.569897   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.569905   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:17.569910   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:17.569958   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:17.605324   78080 cri.go:89] found id: ""
	I0729 18:30:17.605364   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.605384   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:17.605392   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:17.605457   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:17.640552   78080 cri.go:89] found id: ""
	I0729 18:30:17.640583   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.640595   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:17.640602   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:17.640668   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:17.679769   78080 cri.go:89] found id: ""
	I0729 18:30:17.679800   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.679808   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:17.679827   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:17.679843   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:17.757782   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:17.757814   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:17.803850   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:17.803878   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:17.857987   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:17.858017   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:17.871062   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:17.871086   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:17.940456   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:20.441171   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:20.454752   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:20.454824   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:20.490744   78080 cri.go:89] found id: ""
	I0729 18:30:20.490773   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.490783   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:20.490791   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:20.490853   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:20.524406   78080 cri.go:89] found id: ""
	I0729 18:30:20.524437   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.524448   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:20.524463   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:20.524515   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:20.559225   78080 cri.go:89] found id: ""
	I0729 18:30:20.559257   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.559268   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:20.559275   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:20.559337   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:20.595297   78080 cri.go:89] found id: ""
	I0729 18:30:20.595324   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.595355   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:20.595364   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:20.595436   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:20.632176   78080 cri.go:89] found id: ""
	I0729 18:30:20.632204   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.632215   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:20.632222   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:20.632282   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:20.676600   78080 cri.go:89] found id: ""
	I0729 18:30:20.676625   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.676632   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:20.676638   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:20.676734   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:20.717920   78080 cri.go:89] found id: ""
	I0729 18:30:20.717945   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.717955   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:20.717966   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:20.718021   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:20.756217   78080 cri.go:89] found id: ""
	I0729 18:30:20.756243   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.756253   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:20.756262   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:20.756277   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:20.837150   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:20.837189   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:20.876023   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:20.876050   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:20.932402   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:20.932429   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:20.947422   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:20.947454   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:21.022698   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:18.934790   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:21.434992   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:21.242999   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:23.739073   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:21.042689   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:23.042794   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:23.523141   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:23.538019   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:23.538098   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:23.576953   78080 cri.go:89] found id: ""
	I0729 18:30:23.576979   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.576991   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:23.576998   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:23.577060   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:23.613052   78080 cri.go:89] found id: ""
	I0729 18:30:23.613083   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.613094   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:23.613100   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:23.613170   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:23.648694   78080 cri.go:89] found id: ""
	I0729 18:30:23.648717   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.648725   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:23.648730   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:23.648775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:23.680939   78080 cri.go:89] found id: ""
	I0729 18:30:23.680965   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.680972   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:23.680977   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:23.681032   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:23.716529   78080 cri.go:89] found id: ""
	I0729 18:30:23.716556   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.716564   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:23.716569   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:23.716628   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:23.756833   78080 cri.go:89] found id: ""
	I0729 18:30:23.756860   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.756868   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:23.756873   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:23.756918   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:23.796436   78080 cri.go:89] found id: ""
	I0729 18:30:23.796460   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.796467   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:23.796472   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:23.796519   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:23.839877   78080 cri.go:89] found id: ""
	I0729 18:30:23.839906   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.839914   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:23.839922   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:23.839934   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:23.879423   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:23.879447   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:23.928379   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:23.928408   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:23.942639   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:23.942669   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:24.014068   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:24.014095   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:24.014110   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:26.597923   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:26.610877   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:26.610945   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:26.647550   78080 cri.go:89] found id: ""
	I0729 18:30:26.647579   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.647590   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:26.647598   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:26.647655   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:26.681552   78080 cri.go:89] found id: ""
	I0729 18:30:26.681581   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.681589   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:26.681595   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:26.681660   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:26.714475   78080 cri.go:89] found id: ""
	I0729 18:30:26.714503   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.714513   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:26.714519   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:26.714588   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:26.748671   78080 cri.go:89] found id: ""
	I0729 18:30:26.748697   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.748707   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:26.748714   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:26.748775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:26.781380   78080 cri.go:89] found id: ""
	I0729 18:30:26.781406   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.781421   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:26.781429   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:26.781483   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:26.815201   78080 cri.go:89] found id: ""
	I0729 18:30:26.815230   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.815243   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:26.815251   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:26.815318   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:26.848600   78080 cri.go:89] found id: ""
	I0729 18:30:26.848628   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.848637   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:26.848644   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:26.848724   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:26.883828   78080 cri.go:89] found id: ""
	I0729 18:30:26.883872   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.883883   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:26.883893   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:26.883908   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:26.936955   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:26.936987   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:26.952212   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:26.952238   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:27.019389   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:27.019413   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:27.019426   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:27.095654   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:27.095682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:23.935397   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:26.435231   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:26.238749   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:28.239699   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:25.044320   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:27.542022   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:29.542274   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:29.637269   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:29.652138   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:29.652211   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:29.691063   78080 cri.go:89] found id: ""
	I0729 18:30:29.691094   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.691104   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:29.691111   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:29.691173   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:29.725188   78080 cri.go:89] found id: ""
	I0729 18:30:29.725224   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.725232   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:29.725240   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:29.725308   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:29.764118   78080 cri.go:89] found id: ""
	I0729 18:30:29.764149   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.764159   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:29.764167   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:29.764232   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:29.797884   78080 cri.go:89] found id: ""
	I0729 18:30:29.797909   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.797919   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:29.797927   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:29.797989   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:29.838784   78080 cri.go:89] found id: ""
	I0729 18:30:29.838808   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.838815   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:29.838821   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:29.838885   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:29.872394   78080 cri.go:89] found id: ""
	I0729 18:30:29.872420   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.872427   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:29.872433   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:29.872491   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:29.908966   78080 cri.go:89] found id: ""
	I0729 18:30:29.908995   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.909012   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:29.909020   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:29.909081   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:29.946322   78080 cri.go:89] found id: ""
	I0729 18:30:29.946344   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.946352   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:29.946371   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:29.946386   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:30.019133   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:30.019166   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:30.019179   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:30.096499   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:30.096532   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:30.136487   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:30.136519   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:30.187341   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:30.187374   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:28.435472   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:30.934817   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:30.739101   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:32.742029   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:32.042850   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:34.042919   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:32.703546   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:32.716981   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:32.717042   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:32.753275   78080 cri.go:89] found id: ""
	I0729 18:30:32.753307   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.753318   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:32.753326   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:32.753393   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:32.789075   78080 cri.go:89] found id: ""
	I0729 18:30:32.789105   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.789116   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:32.789123   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:32.789185   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:32.822945   78080 cri.go:89] found id: ""
	I0729 18:30:32.822971   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.822979   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:32.822984   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:32.823033   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:32.856523   78080 cri.go:89] found id: ""
	I0729 18:30:32.856577   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.856589   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:32.856597   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:32.856661   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:32.895768   78080 cri.go:89] found id: ""
	I0729 18:30:32.895798   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.895810   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:32.895817   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:32.895876   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:32.934990   78080 cri.go:89] found id: ""
	I0729 18:30:32.935030   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.935042   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:32.935054   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:32.935132   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:32.970924   78080 cri.go:89] found id: ""
	I0729 18:30:32.970949   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.970957   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:32.970964   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:32.971022   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:33.004133   78080 cri.go:89] found id: ""
	I0729 18:30:33.004164   78080 logs.go:276] 0 containers: []
	W0729 18:30:33.004173   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:33.004182   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:33.004202   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:33.043432   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:33.043467   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:33.095517   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:33.095554   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:33.108859   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:33.108889   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:33.180661   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:33.180681   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:33.180696   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:35.763324   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:35.777060   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:35.777138   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:35.812601   78080 cri.go:89] found id: ""
	I0729 18:30:35.812636   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.812647   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:35.812654   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:35.812719   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:35.848116   78080 cri.go:89] found id: ""
	I0729 18:30:35.848161   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.848172   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:35.848179   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:35.848240   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:35.895786   78080 cri.go:89] found id: ""
	I0729 18:30:35.895817   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.895829   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:35.895837   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:35.895911   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:35.936753   78080 cri.go:89] found id: ""
	I0729 18:30:35.936780   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.936787   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:35.936794   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:35.936848   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:35.971321   78080 cri.go:89] found id: ""
	I0729 18:30:35.971349   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.971358   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:35.971371   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:35.971434   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:36.018702   78080 cri.go:89] found id: ""
	I0729 18:30:36.018725   78080 logs.go:276] 0 containers: []
	W0729 18:30:36.018732   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:36.018737   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:36.018792   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:36.054829   78080 cri.go:89] found id: ""
	I0729 18:30:36.054865   78080 logs.go:276] 0 containers: []
	W0729 18:30:36.054875   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:36.054882   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:36.054948   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:36.087456   78080 cri.go:89] found id: ""
	I0729 18:30:36.087483   78080 logs.go:276] 0 containers: []
	W0729 18:30:36.087492   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:36.087500   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:36.087512   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:36.140919   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:36.140951   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:36.155581   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:36.155614   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:36.227617   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:36.227642   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:36.227669   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:36.304610   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:36.304651   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:32.935270   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:34.935362   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:35.239258   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:37.242161   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:39.739031   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:36.043489   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:38.542041   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:38.843099   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:38.857571   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:38.857626   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:38.890760   78080 cri.go:89] found id: ""
	I0729 18:30:38.890790   78080 logs.go:276] 0 containers: []
	W0729 18:30:38.890801   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:38.890809   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:38.890884   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:38.932701   78080 cri.go:89] found id: ""
	I0729 18:30:38.932738   78080 logs.go:276] 0 containers: []
	W0729 18:30:38.932748   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:38.932755   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:38.932812   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:38.967379   78080 cri.go:89] found id: ""
	I0729 18:30:38.967406   78080 logs.go:276] 0 containers: []
	W0729 18:30:38.967416   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:38.967430   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:38.967490   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:39.000419   78080 cri.go:89] found id: ""
	I0729 18:30:39.000450   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.000459   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:39.000466   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:39.000528   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:39.033764   78080 cri.go:89] found id: ""
	I0729 18:30:39.033793   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.033802   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:39.033807   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:39.033857   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:39.070904   78080 cri.go:89] found id: ""
	I0729 18:30:39.070933   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.070944   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:39.070951   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:39.071010   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:39.107444   78080 cri.go:89] found id: ""
	I0729 18:30:39.107471   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.107480   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:39.107488   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:39.107549   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:39.141392   78080 cri.go:89] found id: ""
	I0729 18:30:39.141423   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.141436   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:39.141449   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:39.141464   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:39.154874   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:39.154905   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:39.229370   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:39.229396   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:39.229413   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:39.310508   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:39.310538   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:39.352547   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:39.352569   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:41.908463   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:41.922132   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:41.922209   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:41.960404   78080 cri.go:89] found id: ""
	I0729 18:30:41.960431   78080 logs.go:276] 0 containers: []
	W0729 18:30:41.960439   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:41.960444   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:41.960498   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:41.994082   78080 cri.go:89] found id: ""
	I0729 18:30:41.994110   78080 logs.go:276] 0 containers: []
	W0729 18:30:41.994117   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:41.994123   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:41.994177   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:42.030301   78080 cri.go:89] found id: ""
	I0729 18:30:42.030322   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.030330   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:42.030336   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:42.030401   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:42.064310   78080 cri.go:89] found id: ""
	I0729 18:30:42.064339   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.064349   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:42.064356   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:42.064413   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:42.097705   78080 cri.go:89] found id: ""
	I0729 18:30:42.097738   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.097748   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:42.097761   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:42.097819   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:42.133254   78080 cri.go:89] found id: ""
	I0729 18:30:42.133282   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.133292   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:42.133299   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:42.133361   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:42.170028   78080 cri.go:89] found id: ""
	I0729 18:30:42.170054   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.170063   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:42.170075   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:42.170141   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:42.205680   78080 cri.go:89] found id: ""
	I0729 18:30:42.205712   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.205723   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:42.205736   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:42.205749   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:37.442211   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:39.934866   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:41.935293   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:42.240035   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:41.041897   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:43.042300   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:42.246322   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:42.246350   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:42.300852   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:42.300884   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:42.316306   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:42.316333   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:42.389898   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:42.389920   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:42.389934   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:44.971238   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:44.984796   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:44.984846   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:45.021842   78080 cri.go:89] found id: ""
	I0729 18:30:45.021868   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.021877   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:45.021885   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:45.021958   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:45.059353   78080 cri.go:89] found id: ""
	I0729 18:30:45.059377   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.059387   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:45.059394   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:45.059456   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:45.094867   78080 cri.go:89] found id: ""
	I0729 18:30:45.094900   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.094911   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:45.094918   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:45.094974   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:45.128589   78080 cri.go:89] found id: ""
	I0729 18:30:45.128614   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.128622   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:45.128628   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:45.128671   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:45.160137   78080 cri.go:89] found id: ""
	I0729 18:30:45.160165   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.160172   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:45.160177   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:45.160228   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:45.205757   78080 cri.go:89] found id: ""
	I0729 18:30:45.205780   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.205787   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:45.205793   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:45.205840   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:45.250056   78080 cri.go:89] found id: ""
	I0729 18:30:45.250084   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.250091   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:45.250096   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:45.250179   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:45.285349   78080 cri.go:89] found id: ""
	I0729 18:30:45.285372   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.285380   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:45.285389   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:45.285401   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:45.364188   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:45.364218   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:45.412638   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:45.412660   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:45.467713   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:45.467745   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:45.483811   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:45.483835   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:45.564866   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:44.434921   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:46.934237   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:44.740648   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:47.239253   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:49.240229   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:45.043415   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:47.542757   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:49.543251   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:48.065579   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:48.079441   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:48.079511   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:48.115540   78080 cri.go:89] found id: ""
	I0729 18:30:48.115569   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.115578   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:48.115586   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:48.115670   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:48.151810   78080 cri.go:89] found id: ""
	I0729 18:30:48.151834   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.151841   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:48.151847   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:48.151913   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:48.187459   78080 cri.go:89] found id: ""
	I0729 18:30:48.187490   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.187500   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:48.187508   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:48.187568   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:48.226804   78080 cri.go:89] found id: ""
	I0729 18:30:48.226835   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.226846   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:48.226853   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:48.226916   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:48.260413   78080 cri.go:89] found id: ""
	I0729 18:30:48.260439   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.260448   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:48.260455   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:48.260517   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:48.296719   78080 cri.go:89] found id: ""
	I0729 18:30:48.296743   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.296751   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:48.296756   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:48.296806   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:48.331969   78080 cri.go:89] found id: ""
	I0729 18:30:48.331995   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.332002   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:48.332008   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:48.332055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:48.370593   78080 cri.go:89] found id: ""
	I0729 18:30:48.370618   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.370626   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:48.370634   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:48.370645   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:48.410653   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:48.410679   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:48.465467   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:48.465503   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:48.480025   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:48.480053   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:48.557806   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:48.557824   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:48.557840   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:51.140743   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:51.153970   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:51.154046   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:51.187826   78080 cri.go:89] found id: ""
	I0729 18:30:51.187851   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.187862   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:51.187868   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:51.187922   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:51.226140   78080 cri.go:89] found id: ""
	I0729 18:30:51.226172   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.226182   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:51.226189   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:51.226255   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:51.262321   78080 cri.go:89] found id: ""
	I0729 18:30:51.262349   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.262357   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:51.262378   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:51.262440   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:51.295356   78080 cri.go:89] found id: ""
	I0729 18:30:51.295383   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.295395   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:51.295403   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:51.295467   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:51.328320   78080 cri.go:89] found id: ""
	I0729 18:30:51.328349   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.328361   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:51.328367   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:51.328424   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:51.364202   78080 cri.go:89] found id: ""
	I0729 18:30:51.364233   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.364242   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:51.364249   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:51.364313   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:51.405500   78080 cri.go:89] found id: ""
	I0729 18:30:51.405529   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.405538   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:51.405544   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:51.405606   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:51.443519   78080 cri.go:89] found id: ""
	I0729 18:30:51.443541   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.443548   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:51.443556   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:51.443567   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:51.495560   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:51.495599   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:51.512152   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:51.512178   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:51.590972   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:51.590992   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:51.591021   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:51.688717   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:51.688757   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:48.934577   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:51.437173   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:51.739680   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:54.238626   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:52.044254   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:54.545288   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:54.256011   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:54.270602   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:54.270653   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:54.311547   78080 cri.go:89] found id: ""
	I0729 18:30:54.311574   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.311584   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:54.311592   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:54.311655   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:54.347559   78080 cri.go:89] found id: ""
	I0729 18:30:54.347591   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.347602   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:54.347610   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:54.347675   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:54.382180   78080 cri.go:89] found id: ""
	I0729 18:30:54.382205   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.382212   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:54.382217   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:54.382264   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:54.415560   78080 cri.go:89] found id: ""
	I0729 18:30:54.415587   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.415594   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:54.415600   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:54.415655   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:54.450313   78080 cri.go:89] found id: ""
	I0729 18:30:54.450341   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.450351   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:54.450372   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:54.450439   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:54.484649   78080 cri.go:89] found id: ""
	I0729 18:30:54.484678   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.484687   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:54.484694   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:54.484741   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:54.520170   78080 cri.go:89] found id: ""
	I0729 18:30:54.520204   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.520212   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:54.520220   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:54.520270   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:54.562724   78080 cri.go:89] found id: ""
	I0729 18:30:54.562753   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.562762   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:54.562772   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:54.562788   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:54.617461   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:54.617498   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:54.630970   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:54.630993   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:54.699332   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:54.699353   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:54.699366   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:54.779240   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:54.779276   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:53.934151   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:56.434549   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:56.239554   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:58.239583   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:57.041845   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:59.042164   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:57.318673   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:57.332789   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:57.332845   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:57.370434   78080 cri.go:89] found id: ""
	I0729 18:30:57.370461   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.370486   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:57.370492   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:57.370547   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:57.420694   78080 cri.go:89] found id: ""
	I0729 18:30:57.420724   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.420735   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:57.420742   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:57.420808   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:57.469245   78080 cri.go:89] found id: ""
	I0729 18:30:57.469271   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.469282   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:57.469288   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:57.469355   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:57.524937   78080 cri.go:89] found id: ""
	I0729 18:30:57.524963   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.524970   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:57.524976   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:57.525031   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:57.566803   78080 cri.go:89] found id: ""
	I0729 18:30:57.566830   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.566840   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:57.566847   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:57.566910   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:57.602786   78080 cri.go:89] found id: ""
	I0729 18:30:57.602814   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.602821   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:57.602826   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:57.602891   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:57.639319   78080 cri.go:89] found id: ""
	I0729 18:30:57.639347   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.639355   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:57.639361   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:57.639408   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:57.672580   78080 cri.go:89] found id: ""
	I0729 18:30:57.672610   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.672621   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:57.672632   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:57.672647   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:57.751550   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:57.751572   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:57.751586   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:57.840057   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:57.840097   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:57.884698   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:57.884737   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:57.944468   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:57.944497   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:00.459605   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:00.473079   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:00.473138   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:00.508492   78080 cri.go:89] found id: ""
	I0729 18:31:00.508525   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.508536   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:00.508543   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:00.508604   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:00.544844   78080 cri.go:89] found id: ""
	I0729 18:31:00.544875   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.544886   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:00.544899   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:00.544960   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:00.578402   78080 cri.go:89] found id: ""
	I0729 18:31:00.578432   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.578443   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:00.578450   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:00.578508   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:00.611886   78080 cri.go:89] found id: ""
	I0729 18:31:00.611913   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.611922   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:00.611928   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:00.611989   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:00.649126   78080 cri.go:89] found id: ""
	I0729 18:31:00.649153   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.649162   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:00.649168   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:00.649229   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:00.686534   78080 cri.go:89] found id: ""
	I0729 18:31:00.686561   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.686571   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:00.686578   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:00.686639   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:00.718656   78080 cri.go:89] found id: ""
	I0729 18:31:00.718680   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.718690   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:00.718696   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:00.718755   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:00.752740   78080 cri.go:89] found id: ""
	I0729 18:31:00.752766   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.752776   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:00.752786   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:00.752800   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:00.804293   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:00.804323   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:00.817988   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:00.818010   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:00.892178   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:00.892210   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:00.892231   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:00.973164   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:00.973199   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:58.434888   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:00.934518   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:00.239908   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:02.240038   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:04.240420   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:01.542080   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:03.542877   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:04.036213   77627 pod_ready.go:81] duration metric: took 4m0.000109353s for pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace to be "Ready" ...
	E0729 18:31:04.036235   77627 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 18:31:04.036250   77627 pod_ready.go:38] duration metric: took 4m10.564329435s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:31:04.036294   77627 kubeadm.go:597] duration metric: took 4m18.357564209s to restartPrimaryControlPlane
	W0729 18:31:04.036359   77627 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 18:31:04.036388   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:31:03.512105   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:03.526536   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:03.526602   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:03.561579   78080 cri.go:89] found id: ""
	I0729 18:31:03.561604   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.561614   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:03.561621   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:03.561681   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:03.603995   78080 cri.go:89] found id: ""
	I0729 18:31:03.604019   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.604028   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:03.604033   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:03.604079   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:03.640879   78080 cri.go:89] found id: ""
	I0729 18:31:03.640902   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.640910   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:03.640917   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:03.640971   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:03.675262   78080 cri.go:89] found id: ""
	I0729 18:31:03.675288   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.675296   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:03.675302   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:03.675349   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:03.708094   78080 cri.go:89] found id: ""
	I0729 18:31:03.708128   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.708137   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:03.708142   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:03.708190   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:03.748262   78080 cri.go:89] found id: ""
	I0729 18:31:03.748287   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.748298   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:03.748304   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:03.748360   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:03.789758   78080 cri.go:89] found id: ""
	I0729 18:31:03.789788   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.789800   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:03.789806   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:03.789893   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:03.829253   78080 cri.go:89] found id: ""
	I0729 18:31:03.829280   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.829291   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:03.829299   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:03.829317   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:03.883012   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:03.883044   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:03.899264   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:03.899294   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:03.970241   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:03.970261   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:03.970274   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:04.056205   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:04.056244   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:06.604919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:06.619163   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:06.619242   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:06.656939   78080 cri.go:89] found id: ""
	I0729 18:31:06.656970   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.656982   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:06.656989   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:06.657075   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:06.692577   78080 cri.go:89] found id: ""
	I0729 18:31:06.692608   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.692624   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:06.692632   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:06.692695   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:06.730045   78080 cri.go:89] found id: ""
	I0729 18:31:06.730077   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.730088   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:06.730096   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:06.730179   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:06.771794   78080 cri.go:89] found id: ""
	I0729 18:31:06.771820   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.771830   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:06.771838   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:06.771905   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:06.806149   78080 cri.go:89] found id: ""
	I0729 18:31:06.806177   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.806187   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:06.806194   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:06.806252   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:06.851875   78080 cri.go:89] found id: ""
	I0729 18:31:06.851905   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.851923   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:06.851931   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:06.851996   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:06.890335   78080 cri.go:89] found id: ""
	I0729 18:31:06.890382   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.890393   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:06.890399   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:06.890460   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:06.928873   78080 cri.go:89] found id: ""
	I0729 18:31:06.928902   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.928912   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:06.928922   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:06.928935   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:06.944269   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:06.944295   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:07.011658   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:07.011682   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:07.011697   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:07.109899   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:07.109948   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:07.154569   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:07.154600   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:02.935054   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:05.434752   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:06.242994   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:08.738448   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:09.709101   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:09.722387   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:09.722461   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:09.760443   78080 cri.go:89] found id: ""
	I0729 18:31:09.760471   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.760481   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:09.760488   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:09.760551   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:09.796177   78080 cri.go:89] found id: ""
	I0729 18:31:09.796200   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.796209   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:09.796214   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:09.796264   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:09.831955   78080 cri.go:89] found id: ""
	I0729 18:31:09.831983   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.831990   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:09.831995   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:09.832055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:09.863913   78080 cri.go:89] found id: ""
	I0729 18:31:09.863939   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.863949   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:09.863956   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:09.864014   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:09.897553   78080 cri.go:89] found id: ""
	I0729 18:31:09.897575   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.897583   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:09.897588   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:09.897645   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:09.935203   78080 cri.go:89] found id: ""
	I0729 18:31:09.935221   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.935228   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:09.935238   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:09.935296   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:09.971098   78080 cri.go:89] found id: ""
	I0729 18:31:09.971125   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.971135   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:09.971142   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:09.971224   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:10.006760   78080 cri.go:89] found id: ""
	I0729 18:31:10.006794   78080 logs.go:276] 0 containers: []
	W0729 18:31:10.006804   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:10.006815   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:10.006830   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:10.056037   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:10.056066   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:10.070633   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:10.070660   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:10.139953   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:10.139983   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:10.140002   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:10.220748   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:10.220781   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:07.436020   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:09.934218   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:11.934977   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:10.740109   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:13.239440   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:12.766391   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:12.779837   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:12.779889   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:12.813910   78080 cri.go:89] found id: ""
	I0729 18:31:12.813941   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.813951   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:12.813959   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:12.814008   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:12.848811   78080 cri.go:89] found id: ""
	I0729 18:31:12.848854   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.848865   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:12.848872   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:12.848927   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:12.884740   78080 cri.go:89] found id: ""
	I0729 18:31:12.884769   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.884780   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:12.884786   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:12.884833   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:12.923826   78080 cri.go:89] found id: ""
	I0729 18:31:12.923859   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.923870   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:12.923878   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:12.923930   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:12.959127   78080 cri.go:89] found id: ""
	I0729 18:31:12.959157   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.959168   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:12.959175   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:12.959245   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:12.994384   78080 cri.go:89] found id: ""
	I0729 18:31:12.994417   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.994430   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:12.994439   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:12.994506   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:13.027854   78080 cri.go:89] found id: ""
	I0729 18:31:13.027883   78080 logs.go:276] 0 containers: []
	W0729 18:31:13.027892   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:13.027897   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:13.027951   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:13.062270   78080 cri.go:89] found id: ""
	I0729 18:31:13.062300   78080 logs.go:276] 0 containers: []
	W0729 18:31:13.062310   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:13.062321   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:13.062334   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:13.114473   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:13.114500   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:13.127820   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:13.127845   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:13.195830   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:13.195848   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:13.195862   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:13.281711   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:13.281748   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:15.824456   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:15.837532   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:15.837587   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:15.871706   78080 cri.go:89] found id: ""
	I0729 18:31:15.871739   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.871750   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:15.871757   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:15.871817   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:15.906882   78080 cri.go:89] found id: ""
	I0729 18:31:15.906905   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.906912   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:15.906917   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:15.906976   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:15.943015   78080 cri.go:89] found id: ""
	I0729 18:31:15.943043   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.943057   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:15.943065   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:15.943126   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:15.980501   78080 cri.go:89] found id: ""
	I0729 18:31:15.980528   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.980536   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:15.980542   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:15.980588   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:16.014148   78080 cri.go:89] found id: ""
	I0729 18:31:16.014176   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.014183   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:16.014189   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:16.014236   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:16.048296   78080 cri.go:89] found id: ""
	I0729 18:31:16.048319   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.048326   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:16.048334   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:16.048392   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:16.084328   78080 cri.go:89] found id: ""
	I0729 18:31:16.084350   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.084358   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:16.084363   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:16.084411   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:16.120048   78080 cri.go:89] found id: ""
	I0729 18:31:16.120076   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.120084   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:16.120092   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:16.120105   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:16.173476   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:16.173503   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:16.190200   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:16.190232   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:16.261993   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:16.262014   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:16.262026   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:16.340298   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:16.340331   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:14.434706   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:16.936150   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:15.739493   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:18.239834   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:18.883152   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:18.897292   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:18.897360   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:18.931276   78080 cri.go:89] found id: ""
	I0729 18:31:18.931303   78080 logs.go:276] 0 containers: []
	W0729 18:31:18.931313   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:18.931321   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:18.931379   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:18.975803   78080 cri.go:89] found id: ""
	I0729 18:31:18.975832   78080 logs.go:276] 0 containers: []
	W0729 18:31:18.975843   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:18.975853   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:18.975912   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:19.012920   78080 cri.go:89] found id: ""
	I0729 18:31:19.012951   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.012963   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:19.012970   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:19.013031   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:19.047640   78080 cri.go:89] found id: ""
	I0729 18:31:19.047667   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.047679   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:19.047687   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:19.047749   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:19.082495   78080 cri.go:89] found id: ""
	I0729 18:31:19.082522   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.082533   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:19.082540   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:19.082591   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:19.117988   78080 cri.go:89] found id: ""
	I0729 18:31:19.118016   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.118027   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:19.118034   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:19.118096   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:19.153725   78080 cri.go:89] found id: ""
	I0729 18:31:19.153753   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.153764   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:19.153771   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:19.153836   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:19.192827   78080 cri.go:89] found id: ""
	I0729 18:31:19.192857   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.192868   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:19.192879   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:19.192894   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:19.208802   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:19.208833   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:19.285877   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:19.285897   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:19.285909   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:19.366563   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:19.366598   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:19.404563   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:19.404590   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:21.958449   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:21.971674   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:21.971739   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:22.006231   78080 cri.go:89] found id: ""
	I0729 18:31:22.006253   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.006261   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:22.006266   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:22.006314   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:22.042575   78080 cri.go:89] found id: ""
	I0729 18:31:22.042599   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.042609   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:22.042616   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:22.042679   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:22.079446   78080 cri.go:89] found id: ""
	I0729 18:31:22.079471   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.079482   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:22.079489   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:22.079554   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:22.115940   78080 cri.go:89] found id: ""
	I0729 18:31:22.115967   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.115976   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:22.115984   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:22.116055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:22.149420   78080 cri.go:89] found id: ""
	I0729 18:31:22.149447   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.149456   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:22.149461   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:22.149511   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:22.182992   78080 cri.go:89] found id: ""
	I0729 18:31:22.183019   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.183027   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:22.183032   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:22.183090   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:22.218441   78080 cri.go:89] found id: ""
	I0729 18:31:22.218474   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.218487   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:22.218497   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:22.218564   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:19.434020   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:21.434806   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:20.739308   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:22.741502   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:22.263135   78080 cri.go:89] found id: ""
	I0729 18:31:22.263164   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.263173   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:22.263183   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:22.263198   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:22.319010   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:22.319049   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:22.333151   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:22.333179   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:22.404661   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:22.404683   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:22.404706   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:22.488497   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:22.488537   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:25.032215   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:25.045114   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:25.045191   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:25.082244   78080 cri.go:89] found id: ""
	I0729 18:31:25.082278   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.082289   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:25.082299   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:25.082388   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:25.118295   78080 cri.go:89] found id: ""
	I0729 18:31:25.118318   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.118325   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:25.118331   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:25.118395   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:25.157948   78080 cri.go:89] found id: ""
	I0729 18:31:25.157974   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.157984   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:25.157992   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:25.158054   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:25.194708   78080 cri.go:89] found id: ""
	I0729 18:31:25.194734   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.194743   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:25.194751   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:25.194813   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:25.235923   78080 cri.go:89] found id: ""
	I0729 18:31:25.235952   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.235962   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:25.235969   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:25.236032   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:25.271316   78080 cri.go:89] found id: ""
	I0729 18:31:25.271342   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.271353   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:25.271360   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:25.271422   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:25.309399   78080 cri.go:89] found id: ""
	I0729 18:31:25.309427   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.309438   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:25.309446   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:25.309503   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:25.347979   78080 cri.go:89] found id: ""
	I0729 18:31:25.348009   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.348021   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:25.348031   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:25.348046   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:25.400785   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:25.400812   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:25.413891   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:25.413915   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:25.487721   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:25.487752   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:25.487767   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:25.575500   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:25.575531   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:23.935200   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:26.434289   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:25.240961   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:27.738838   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:27.738866   77859 pod_ready.go:81] duration metric: took 4m0.005785253s for pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace to be "Ready" ...
	E0729 18:31:27.738877   77859 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0729 18:31:27.738887   77859 pod_ready.go:38] duration metric: took 4m4.550102816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:31:27.738903   77859 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:31:27.738934   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:27.738991   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:27.798686   77859 cri.go:89] found id: "630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:27.798710   77859 cri.go:89] found id: ""
	I0729 18:31:27.798717   77859 logs.go:276] 1 containers: [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4]
	I0729 18:31:27.798774   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.804769   77859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:27.804827   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:27.849829   77859 cri.go:89] found id: "fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:27.849849   77859 cri.go:89] found id: ""
	I0729 18:31:27.849857   77859 logs.go:276] 1 containers: [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a]
	I0729 18:31:27.849909   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.854472   77859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:27.854540   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:27.891637   77859 cri.go:89] found id: "2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:27.891659   77859 cri.go:89] found id: ""
	I0729 18:31:27.891668   77859 logs.go:276] 1 containers: [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b]
	I0729 18:31:27.891715   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.896663   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:27.896713   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:27.941948   77859 cri.go:89] found id: "991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:27.941968   77859 cri.go:89] found id: ""
	I0729 18:31:27.941976   77859 logs.go:276] 1 containers: [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd]
	I0729 18:31:27.942018   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.946770   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:27.946821   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:27.988118   77859 cri.go:89] found id: "ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:27.988139   77859 cri.go:89] found id: ""
	I0729 18:31:27.988147   77859 logs.go:276] 1 containers: [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9]
	I0729 18:31:27.988193   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.992474   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:27.992535   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:28.032779   77859 cri.go:89] found id: "92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:28.032801   77859 cri.go:89] found id: ""
	I0729 18:31:28.032811   77859 logs.go:276] 1 containers: [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc]
	I0729 18:31:28.032859   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:28.037791   77859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:28.037838   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:28.081087   77859 cri.go:89] found id: ""
	I0729 18:31:28.081115   77859 logs.go:276] 0 containers: []
	W0729 18:31:28.081124   77859 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:28.081131   77859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 18:31:28.081183   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 18:31:28.123906   77859 cri.go:89] found id: "9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:28.123927   77859 cri.go:89] found id: "482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:28.123933   77859 cri.go:89] found id: ""
	I0729 18:31:28.123940   77859 logs.go:276] 2 containers: [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b]
	I0729 18:31:28.123979   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:28.128737   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:28.133127   77859 logs.go:123] Gathering logs for storage-provisioner [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481] ...
	I0729 18:31:28.133201   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:28.182950   77859 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:28.182985   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:28.241873   77859 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:28.241914   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 18:31:28.391355   77859 logs.go:123] Gathering logs for kube-apiserver [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4] ...
	I0729 18:31:28.391389   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:28.447637   77859 logs.go:123] Gathering logs for etcd [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a] ...
	I0729 18:31:28.447671   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:28.496815   77859 logs.go:123] Gathering logs for kube-scheduler [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd] ...
	I0729 18:31:28.496848   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:28.540617   77859 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:28.540651   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:29.063074   77859 logs.go:123] Gathering logs for container status ...
	I0729 18:31:29.063116   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:29.123348   77859 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:29.123378   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:29.137340   77859 logs.go:123] Gathering logs for coredns [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b] ...
	I0729 18:31:29.137365   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:29.174775   77859 logs.go:123] Gathering logs for kube-proxy [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9] ...
	I0729 18:31:29.174810   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:29.227526   77859 logs.go:123] Gathering logs for kube-controller-manager [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc] ...
	I0729 18:31:29.227560   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:29.281814   77859 logs.go:123] Gathering logs for storage-provisioner [482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b] ...
	I0729 18:31:29.281844   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
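(Editor's note) The block above is minikube's per-component log sweep: for each control-plane component it lists matching CRI containers with crictl, then tails each container's logs. A rough shell equivalent, given only as an illustration (component names and the 400-line tail are taken from the commands shown; the loop structure itself is an assumption):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      # list all containers (running or exited) for this component
      for id in $(sudo crictl ps -a --quiet --name="${name}"); do
        sudo /usr/bin/crictl logs --tail 400 "${id}"
      done
    done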
	I0729 18:31:28.121761   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:28.136756   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:28.136813   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:28.175461   78080 cri.go:89] found id: ""
	I0729 18:31:28.175491   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.175502   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:28.175509   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:28.175567   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:28.215024   78080 cri.go:89] found id: ""
	I0729 18:31:28.215046   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.215055   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:28.215060   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:28.215122   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:28.253999   78080 cri.go:89] found id: ""
	I0729 18:31:28.254023   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.254031   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:28.254037   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:28.254090   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:28.287902   78080 cri.go:89] found id: ""
	I0729 18:31:28.287929   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.287940   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:28.287948   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:28.288006   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:28.322390   78080 cri.go:89] found id: ""
	I0729 18:31:28.322422   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.322433   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:28.322441   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:28.322500   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:28.356951   78080 cri.go:89] found id: ""
	I0729 18:31:28.356980   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.356991   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:28.356999   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:28.357060   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:28.393439   78080 cri.go:89] found id: ""
	I0729 18:31:28.393461   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.393471   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:28.393477   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:28.393535   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:28.431827   78080 cri.go:89] found id: ""
	I0729 18:31:28.431858   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.431868   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:28.431878   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:28.431892   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:28.509279   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:28.509315   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:28.564036   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:28.564064   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:28.626970   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:28.627000   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:28.641417   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:28.641446   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:28.713406   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:31.213942   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:31.228942   78080 kubeadm.go:597] duration metric: took 4m3.040952507s to restartPrimaryControlPlane
	W0729 18:31:31.229020   78080 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 18:31:31.229042   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:31:31.696335   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:31:31.711230   78080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:31:31.720924   78080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:31:31.730348   78080 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:31:31.730378   78080 kubeadm.go:157] found existing configuration files:
	
	I0729 18:31:31.730418   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:31:31.739761   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:31:31.739810   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:31:31.749021   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:31:31.758107   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:31:31.758155   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:31:31.768326   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:31:31.777347   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:31:31.777388   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:31:31.786752   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:31:31.795728   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:31:31.795776   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
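(Editor's note) The grep/rm pairs above are the stale-config check before kubeadm runs: each kubeconfig under /etc/kubernetes is kept only if it already references control-plane.minikube.internal:8443, otherwise it is deleted so kubeadm can regenerate it. A condensed sketch of that logic (file names and URL taken from the log; the loop form is illustrative):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done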
	I0729 18:31:31.805369   78080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:31:31.883678   78080 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 18:31:31.883751   78080 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:31:32.040989   78080 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:31:32.041127   78080 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:31:32.041259   78080 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:31:32.261525   78080 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:31:28.434784   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:30.435227   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:32.263137   78080 out.go:204]   - Generating certificates and keys ...
	I0729 18:31:32.263242   78080 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:31:32.263349   78080 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:31:32.263461   78080 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:31:32.263554   78080 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:31:32.263640   78080 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:31:32.263724   78080 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:31:32.263801   78080 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:31:32.263872   78080 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:31:32.263993   78080 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:31:32.264109   78080 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:31:32.264164   78080 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:31:32.264255   78080 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:31:32.435248   78080 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:31:32.509478   78080 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:31:32.737003   78080 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:31:33.079523   78080 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:31:33.099871   78080 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:31:33.101450   78080 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:31:33.101520   78080 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:31:33.242577   78080 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:31:31.826678   77859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:31.845448   77859 api_server.go:72] duration metric: took 4m16.365262679s to wait for apiserver process to appear ...
	I0729 18:31:31.845478   77859 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:31:31.845519   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:31.845568   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:31.889194   77859 cri.go:89] found id: "630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:31.889226   77859 cri.go:89] found id: ""
	I0729 18:31:31.889236   77859 logs.go:276] 1 containers: [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4]
	I0729 18:31:31.889290   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:31.894167   77859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:31.894271   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:31.936287   77859 cri.go:89] found id: "fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:31.936306   77859 cri.go:89] found id: ""
	I0729 18:31:31.936315   77859 logs.go:276] 1 containers: [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a]
	I0729 18:31:31.936367   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:31.941051   77859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:31.941110   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:31.978033   77859 cri.go:89] found id: "2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:31.978057   77859 cri.go:89] found id: ""
	I0729 18:31:31.978066   77859 logs.go:276] 1 containers: [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b]
	I0729 18:31:31.978115   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:31.982632   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:31.982704   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:32.023792   77859 cri.go:89] found id: "991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:32.023812   77859 cri.go:89] found id: ""
	I0729 18:31:32.023820   77859 logs.go:276] 1 containers: [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd]
	I0729 18:31:32.023875   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.028309   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:32.028367   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:32.071944   77859 cri.go:89] found id: "ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:32.071966   77859 cri.go:89] found id: ""
	I0729 18:31:32.071975   77859 logs.go:276] 1 containers: [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9]
	I0729 18:31:32.072033   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.076171   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:32.076252   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:32.111357   77859 cri.go:89] found id: "92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:32.111379   77859 cri.go:89] found id: ""
	I0729 18:31:32.111389   77859 logs.go:276] 1 containers: [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc]
	I0729 18:31:32.111446   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.115718   77859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:32.115775   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:32.168552   77859 cri.go:89] found id: ""
	I0729 18:31:32.168586   77859 logs.go:276] 0 containers: []
	W0729 18:31:32.168597   77859 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:32.168604   77859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 18:31:32.168686   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 18:31:32.210002   77859 cri.go:89] found id: "9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:32.210027   77859 cri.go:89] found id: "482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:32.210034   77859 cri.go:89] found id: ""
	I0729 18:31:32.210043   77859 logs.go:276] 2 containers: [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b]
	I0729 18:31:32.210090   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.214929   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.220097   77859 logs.go:123] Gathering logs for container status ...
	I0729 18:31:32.220121   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:32.270343   77859 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:32.270384   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:32.329269   77859 logs.go:123] Gathering logs for kube-apiserver [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4] ...
	I0729 18:31:32.329303   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:32.388361   77859 logs.go:123] Gathering logs for storage-provisioner [482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b] ...
	I0729 18:31:32.388388   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:32.430072   77859 logs.go:123] Gathering logs for coredns [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b] ...
	I0729 18:31:32.430108   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:32.471669   77859 logs.go:123] Gathering logs for kube-scheduler [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd] ...
	I0729 18:31:32.471701   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:32.508395   77859 logs.go:123] Gathering logs for kube-proxy [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9] ...
	I0729 18:31:32.508424   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:32.548968   77859 logs.go:123] Gathering logs for kube-controller-manager [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc] ...
	I0729 18:31:32.549001   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:32.605269   77859 logs.go:123] Gathering logs for storage-provisioner [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481] ...
	I0729 18:31:32.605306   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:32.642298   77859 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:32.642330   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:32.659407   77859 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:32.659431   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 18:31:32.776509   77859 logs.go:123] Gathering logs for etcd [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a] ...
	I0729 18:31:32.776544   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:32.832365   77859 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:32.832395   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:35.748109   77627 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.711694865s)
	I0729 18:31:35.748184   77627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:31:35.765137   77627 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:31:35.775945   77627 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:31:35.786206   77627 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:31:35.786232   77627 kubeadm.go:157] found existing configuration files:
	
	I0729 18:31:35.786284   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:31:35.797157   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:31:35.797218   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:31:35.810497   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:31:35.821537   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:31:35.821603   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:31:35.832985   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:31:35.842247   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:31:35.842309   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:31:35.852578   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:31:35.861798   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:31:35.861858   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:31:35.872903   77627 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:31:35.926675   77627 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 18:31:35.926872   77627 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:31:36.089002   77627 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:31:36.089179   77627 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:31:36.089310   77627 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:31:36.321844   77627 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:31:33.244436   78080 out.go:204]   - Booting up control plane ...
	I0729 18:31:33.244570   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:31:33.245677   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:31:33.249530   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:31:33.250262   78080 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:31:33.261418   78080 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 18:31:36.324255   77627 out.go:204]   - Generating certificates and keys ...
	I0729 18:31:36.324352   77627 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:31:36.324435   77627 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:31:36.324539   77627 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:31:36.324619   77627 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:31:36.324707   77627 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:31:36.324780   77627 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:31:36.324864   77627 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:31:36.324945   77627 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:31:36.325036   77627 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:31:36.325175   77627 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:31:36.325340   77627 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:31:36.325425   77627 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:31:36.815491   77627 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:31:36.870914   77627 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 18:31:36.957705   77627 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:31:37.074845   77627 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:31:37.220920   77627 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:31:37.221651   77627 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:31:37.224384   77627 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:31:32.435653   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:34.933615   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:36.935070   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:35.792366   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:31:35.801160   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 200:
	ok
	I0729 18:31:35.804043   77859 api_server.go:141] control plane version: v1.30.3
	I0729 18:31:35.804063   77859 api_server.go:131] duration metric: took 3.958578435s to wait for apiserver health ...
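(Editor's note) The healthz wait above polls the apiserver until it answers 200. The same probe can be made by hand against the endpoint shown in the log (the -k flag is an assumption, needed only because the apiserver serves a cluster-internal certificate):

    curl -k https://192.168.61.244:8444/healthz
    # expected once the apiserver is up:
    # ok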
	I0729 18:31:35.804072   77859 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:31:35.804099   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:35.804140   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:35.845977   77859 cri.go:89] found id: "630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:35.846003   77859 cri.go:89] found id: ""
	I0729 18:31:35.846018   77859 logs.go:276] 1 containers: [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4]
	I0729 18:31:35.846072   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:35.851227   77859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:35.851302   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:35.892117   77859 cri.go:89] found id: "fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:35.892142   77859 cri.go:89] found id: ""
	I0729 18:31:35.892158   77859 logs.go:276] 1 containers: [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a]
	I0729 18:31:35.892215   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:35.897136   77859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:35.897216   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:35.941512   77859 cri.go:89] found id: "2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:35.941532   77859 cri.go:89] found id: ""
	I0729 18:31:35.941541   77859 logs.go:276] 1 containers: [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b]
	I0729 18:31:35.941598   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:35.946072   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:35.946124   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:35.984306   77859 cri.go:89] found id: "991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:35.984327   77859 cri.go:89] found id: ""
	I0729 18:31:35.984335   77859 logs.go:276] 1 containers: [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd]
	I0729 18:31:35.984381   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:35.988605   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:35.988671   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:36.031476   77859 cri.go:89] found id: "ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:36.031504   77859 cri.go:89] found id: ""
	I0729 18:31:36.031514   77859 logs.go:276] 1 containers: [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9]
	I0729 18:31:36.031567   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:36.037262   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:36.037319   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:36.078054   77859 cri.go:89] found id: "92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:36.078076   77859 cri.go:89] found id: ""
	I0729 18:31:36.078084   77859 logs.go:276] 1 containers: [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc]
	I0729 18:31:36.078134   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:36.082628   77859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:36.082693   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:36.122768   77859 cri.go:89] found id: ""
	I0729 18:31:36.122791   77859 logs.go:276] 0 containers: []
	W0729 18:31:36.122799   77859 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:36.122804   77859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 18:31:36.122849   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 18:31:36.166611   77859 cri.go:89] found id: "9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:36.166636   77859 cri.go:89] found id: "482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:36.166642   77859 cri.go:89] found id: ""
	I0729 18:31:36.166650   77859 logs.go:276] 2 containers: [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b]
	I0729 18:31:36.166712   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:36.171240   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:36.175336   77859 logs.go:123] Gathering logs for kube-controller-manager [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc] ...
	I0729 18:31:36.175354   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:36.233224   77859 logs.go:123] Gathering logs for storage-provisioner [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481] ...
	I0729 18:31:36.233255   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:36.282788   77859 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:36.282820   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:36.675615   77859 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:36.675660   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:36.731559   77859 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:36.731602   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:36.747814   77859 logs.go:123] Gathering logs for kube-scheduler [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd] ...
	I0729 18:31:36.747845   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:36.786940   77859 logs.go:123] Gathering logs for kube-proxy [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9] ...
	I0729 18:31:36.787036   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:36.829659   77859 logs.go:123] Gathering logs for storage-provisioner [482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b] ...
	I0729 18:31:36.829694   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:36.865907   77859 logs.go:123] Gathering logs for container status ...
	I0729 18:31:36.865939   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:36.908399   77859 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:36.908427   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 18:31:37.012220   77859 logs.go:123] Gathering logs for kube-apiserver [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4] ...
	I0729 18:31:37.012255   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:37.063429   77859 logs.go:123] Gathering logs for etcd [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a] ...
	I0729 18:31:37.063463   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:37.107615   77859 logs.go:123] Gathering logs for coredns [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b] ...
	I0729 18:31:37.107654   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:39.655973   77859 system_pods.go:59] 8 kube-system pods found
	I0729 18:31:39.656011   77859 system_pods.go:61] "coredns-7db6d8ff4d-mk6mx" [e005b1f9-cc7a-45aa-915e-85a461ebc814] Running
	I0729 18:31:39.656019   77859 system_pods.go:61] "etcd-default-k8s-diff-port-502055" [72b552cc-67b0-46bf-b3dd-b6732ebe8493] Running
	I0729 18:31:39.656025   77859 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-502055" [0dc22dbc-667e-4d6f-9938-b13bf3503f79] Running
	I0729 18:31:39.656032   77859 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-502055" [4df00b98-12cf-4359-9d98-8cce6ee9708a] Running
	I0729 18:31:39.656037   77859 system_pods.go:61] "kube-proxy-cgdm8" [57a99bb3-9e63-47dd-a958-5be7f3c0a9c0] Running
	I0729 18:31:39.656043   77859 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-502055" [247b7cd1-6267-469d-af05-b33b284ae846] Running
	I0729 18:31:39.656051   77859 system_pods.go:61] "metrics-server-569cc877fc-bm8tm" [6891d9ee-82db-4307-adf1-ff60d35506bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:31:39.656057   77859 system_pods.go:61] "storage-provisioner" [c2264d30-60dc-41f9-9b84-3b073031cf1b] Running
	I0729 18:31:39.656068   77859 system_pods.go:74] duration metric: took 3.851988452s to wait for pod list to return data ...
	I0729 18:31:39.656081   77859 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:31:39.658999   77859 default_sa.go:45] found service account: "default"
	I0729 18:31:39.659024   77859 default_sa.go:55] duration metric: took 2.935237ms for default service account to be created ...
	I0729 18:31:39.659034   77859 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 18:31:39.664926   77859 system_pods.go:86] 8 kube-system pods found
	I0729 18:31:39.664952   77859 system_pods.go:89] "coredns-7db6d8ff4d-mk6mx" [e005b1f9-cc7a-45aa-915e-85a461ebc814] Running
	I0729 18:31:39.664959   77859 system_pods.go:89] "etcd-default-k8s-diff-port-502055" [72b552cc-67b0-46bf-b3dd-b6732ebe8493] Running
	I0729 18:31:39.664966   77859 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-502055" [0dc22dbc-667e-4d6f-9938-b13bf3503f79] Running
	I0729 18:31:39.664973   77859 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-502055" [4df00b98-12cf-4359-9d98-8cce6ee9708a] Running
	I0729 18:31:39.664979   77859 system_pods.go:89] "kube-proxy-cgdm8" [57a99bb3-9e63-47dd-a958-5be7f3c0a9c0] Running
	I0729 18:31:39.664987   77859 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-502055" [247b7cd1-6267-469d-af05-b33b284ae846] Running
	I0729 18:31:39.665003   77859 system_pods.go:89] "metrics-server-569cc877fc-bm8tm" [6891d9ee-82db-4307-adf1-ff60d35506bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:31:39.665013   77859 system_pods.go:89] "storage-provisioner" [c2264d30-60dc-41f9-9b84-3b073031cf1b] Running
	I0729 18:31:39.665025   77859 system_pods.go:126] duration metric: took 5.974722ms to wait for k8s-apps to be running ...
	I0729 18:31:39.665036   77859 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 18:31:39.665093   77859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:31:39.685280   77859 system_svc.go:56] duration metric: took 20.237099ms WaitForService to wait for kubelet
	I0729 18:31:39.685311   77859 kubeadm.go:582] duration metric: took 4m24.205126513s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:31:39.685336   77859 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:31:39.688419   77859 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:31:39.688441   77859 node_conditions.go:123] node cpu capacity is 2
	I0729 18:31:39.688455   77859 node_conditions.go:105] duration metric: took 3.111768ms to run NodePressure ...
	I0729 18:31:39.688470   77859 start.go:241] waiting for startup goroutines ...
	I0729 18:31:39.688483   77859 start.go:246] waiting for cluster config update ...
	I0729 18:31:39.688497   77859 start.go:255] writing updated cluster config ...
	I0729 18:31:39.688830   77859 ssh_runner.go:195] Run: rm -f paused
	I0729 18:31:39.739685   77859 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 18:31:39.741763   77859 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-502055" cluster and "default" namespace by default
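(Editor's note) At this point the profile's kubeconfig entry is written, so the cluster can be inspected directly. A usage sketch, assuming the context name matches the profile name as minikube normally sets it:

    kubectl --context default-k8s-diff-port-502055 get pods -A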
	I0729 18:31:37.226046   77627 out.go:204]   - Booting up control plane ...
	I0729 18:31:37.226163   77627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:31:37.227852   77627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:31:37.228710   77627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:31:37.248177   77627 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:31:37.248863   77627 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:31:37.248915   77627 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:31:37.376905   77627 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 18:31:37.377030   77627 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 18:31:37.878928   77627 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.066447ms
	I0729 18:31:37.879057   77627 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 18:31:38.935622   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:41.433736   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:42.880479   77627 kubeadm.go:310] [api-check] The API server is healthy after 5.001345894s
	I0729 18:31:42.892513   77627 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 18:31:42.910175   77627 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 18:31:42.948111   77627 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 18:31:42.948340   77627 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-409322 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 18:31:42.966823   77627 kubeadm.go:310] [bootstrap-token] Using token: f8a98i.3r2is78gllm02lfe
	I0729 18:31:42.968170   77627 out.go:204]   - Configuring RBAC rules ...
	I0729 18:31:42.968304   77627 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 18:31:42.978257   77627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 18:31:42.986458   77627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 18:31:42.989744   77627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 18:31:42.992484   77627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 18:31:42.995162   77627 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 18:31:43.287739   77627 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 18:31:43.726370   77627 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 18:31:44.290225   77627 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 18:31:44.291166   77627 kubeadm.go:310] 
	I0729 18:31:44.291267   77627 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 18:31:44.291278   77627 kubeadm.go:310] 
	I0729 18:31:44.291392   77627 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 18:31:44.291401   77627 kubeadm.go:310] 
	I0729 18:31:44.291436   77627 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 18:31:44.291530   77627 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 18:31:44.291589   77627 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 18:31:44.291606   77627 kubeadm.go:310] 
	I0729 18:31:44.291701   77627 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 18:31:44.291713   77627 kubeadm.go:310] 
	I0729 18:31:44.291788   77627 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 18:31:44.291797   77627 kubeadm.go:310] 
	I0729 18:31:44.291860   77627 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 18:31:44.291954   77627 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 18:31:44.292052   77627 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 18:31:44.292070   77627 kubeadm.go:310] 
	I0729 18:31:44.292167   77627 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 18:31:44.292269   77627 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 18:31:44.292280   77627 kubeadm.go:310] 
	I0729 18:31:44.292402   77627 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token f8a98i.3r2is78gllm02lfe \
	I0729 18:31:44.292543   77627 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 \
	I0729 18:31:44.292585   77627 kubeadm.go:310] 	--control-plane 
	I0729 18:31:44.292595   77627 kubeadm.go:310] 
	I0729 18:31:44.292710   77627 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 18:31:44.292732   77627 kubeadm.go:310] 
	I0729 18:31:44.292836   77627 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token f8a98i.3r2is78gllm02lfe \
	I0729 18:31:44.293015   77627 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 
	I0729 18:31:44.293440   77627 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
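(Editor's note) The only warning in this otherwise clean init is that the kubelet systemd unit is not enabled; on a node expected to survive reboots it would be addressed with the command the warning itself names:

    sudo systemctl enable kubelet.service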
	I0729 18:31:44.293500   77627 cni.go:84] Creating CNI manager for ""
	I0729 18:31:44.293512   77627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:31:44.295432   77627 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:31:44.296845   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:31:44.308178   77627 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
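(Editor's note) The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI configuration; its exact contents are not shown in the log. For orientation, a minimal bridge + portmap conflist of the same general shape (the name, subnet, and plugin options here are assumptions, not the shipped file):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "k8s",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF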
	I0729 18:31:44.334403   77627 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 18:31:44.334542   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:44.334562   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-409322 minikube.k8s.io/updated_at=2024_07_29T18_31_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899 minikube.k8s.io/name=embed-certs-409322 minikube.k8s.io/primary=true
	I0729 18:31:44.366345   77627 ops.go:34] apiserver oom_adj: -16
	I0729 18:31:44.537970   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:43.433884   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:45.434714   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:45.039020   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:45.538831   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:46.038700   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:46.538761   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:47.038725   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:47.538100   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:48.038309   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:48.538896   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:49.039011   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:49.538333   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:47.435067   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:49.934658   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:50.038548   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:50.538590   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:51.038131   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:51.538253   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:52.038599   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:52.538827   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:53.038077   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:53.538860   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:54.038530   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:54.538952   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:52.433783   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:54.434442   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:56.434864   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:55.038263   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:55.538050   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:56.038006   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:56.538079   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:57.038042   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:57.538146   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:57.696274   77627 kubeadm.go:1113] duration metric: took 13.36179604s to wait for elevateKubeSystemPrivileges
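	The long run of "kubectl get sa default" calls above is a polling loop: the same command is retried until the default ServiceAccount exists, which is the signal that the apiserver is serving and kube-system privileges can be elevated. A rough sketch of that loop, assuming a 500ms interval and a 2-minute deadline (both are illustrative, not the values minikube uses):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// Mirrors the repeated ssh_runner calls above.
			cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.3/kubectl",
				"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
			if err := cmd.Run(); err == nil {
				fmt.Println("default service account present")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for default service account")
	}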
	I0729 18:31:57.696308   77627 kubeadm.go:394] duration metric: took 5m12.066483926s to StartCluster
	I0729 18:31:57.696324   77627 settings.go:142] acquiring lock: {Name:mkd2c4591636cc1d19b23a0dab1807db2e7ea395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:31:57.696406   77627 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:31:57.698195   77627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:31:57.698479   77627 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:31:57.698592   77627 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 18:31:57.698674   77627 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-409322"
	I0729 18:31:57.698688   77627 addons.go:69] Setting metrics-server=true in profile "embed-certs-409322"
	I0729 18:31:57.698695   77627 addons.go:69] Setting default-storageclass=true in profile "embed-certs-409322"
	I0729 18:31:57.698714   77627 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-409322"
	I0729 18:31:57.698719   77627 addons.go:234] Setting addon metrics-server=true in "embed-certs-409322"
	W0729 18:31:57.698723   77627 addons.go:243] addon storage-provisioner should already be in state true
	W0729 18:31:57.698729   77627 addons.go:243] addon metrics-server should already be in state true
	I0729 18:31:57.698733   77627 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-409322"
	I0729 18:31:57.698755   77627 host.go:66] Checking if "embed-certs-409322" exists ...
	I0729 18:31:57.698676   77627 config.go:182] Loaded profile config "embed-certs-409322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:31:57.698760   77627 host.go:66] Checking if "embed-certs-409322" exists ...
	I0729 18:31:57.699157   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.699169   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.699207   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.699170   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.699229   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.699209   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.700201   77627 out.go:177] * Verifying Kubernetes components...
	I0729 18:31:57.701577   77627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:31:57.715130   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44873
	I0729 18:31:57.715156   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34459
	I0729 18:31:57.715708   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.715759   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.716320   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.716329   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.716344   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.716345   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.716666   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.716672   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.716868   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:31:57.717251   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.717283   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.717715   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41041
	I0729 18:31:57.718172   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.718684   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.718709   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.719111   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.719630   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.719670   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.720815   77627 addons.go:234] Setting addon default-storageclass=true in "embed-certs-409322"
	W0729 18:31:57.720839   77627 addons.go:243] addon default-storageclass should already be in state true
	I0729 18:31:57.720870   77627 host.go:66] Checking if "embed-certs-409322" exists ...
	I0729 18:31:57.721233   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.721264   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.733757   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0729 18:31:57.734325   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.735372   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.735397   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.735736   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.735928   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:31:57.735939   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35853
	I0729 18:31:57.736244   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.736923   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.736942   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.737318   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.737664   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:31:57.739761   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:31:57.740354   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:31:57.741103   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43867
	I0729 18:31:57.741489   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.741979   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.741999   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.742296   77627 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 18:31:57.742348   77627 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:31:57.742400   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.743411   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.743443   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.743498   77627 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 18:31:57.743515   77627 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 18:31:57.743537   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:31:57.743682   77627 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:31:57.743697   77627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 18:31:57.743711   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:31:57.748331   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.748743   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:31:57.748759   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.748941   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:31:57.748986   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.749110   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:31:57.749290   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:31:57.749423   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:31:57.749638   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:31:57.749650   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.749671   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:31:57.749834   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:31:57.749940   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:31:57.750051   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:31:57.760794   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33699
	I0729 18:31:57.761136   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.761574   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.761585   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.761954   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.762133   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:31:57.764344   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:31:57.764532   77627 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 18:31:57.764541   77627 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 18:31:57.764555   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:31:57.767111   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.767485   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:31:57.767498   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.767625   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:31:57.767763   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:31:57.767875   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:31:57.768004   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:31:57.965911   77627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:31:57.986557   77627 node_ready.go:35] waiting up to 6m0s for node "embed-certs-409322" to be "Ready" ...
	I0729 18:31:57.995790   77627 node_ready.go:49] node "embed-certs-409322" has status "Ready":"True"
	I0729 18:31:57.995809   77627 node_ready.go:38] duration metric: took 9.222398ms for node "embed-certs-409322" to be "Ready" ...
	I0729 18:31:57.995817   77627 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:31:58.003516   77627 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wpnfg" in "kube-system" namespace to be "Ready" ...
	I0729 18:31:58.047522   77627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:31:58.053274   77627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 18:31:58.053290   77627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 18:31:58.074101   77627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 18:31:58.074127   77627 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 18:31:58.088159   77627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 18:31:58.097491   77627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:31:58.097518   77627 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 18:31:58.125335   77627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:31:58.628396   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.628425   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.628466   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.628480   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.628847   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.628909   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Closing plugin on server side
	I0729 18:31:58.628918   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.628936   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.628946   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.628955   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.628914   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.628898   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Closing plugin on server side
	I0729 18:31:58.629017   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.629046   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.629268   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.629281   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.630616   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Closing plugin on server side
	I0729 18:31:58.630636   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.630649   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.660029   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.660061   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.660339   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.660358   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.975389   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.975414   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.975721   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.975740   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.975750   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.975760   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.976034   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.976051   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.976063   77627 addons.go:475] Verifying addon metrics-server=true in "embed-certs-409322"
	I0729 18:31:58.978172   77627 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 18:31:58.979568   77627 addons.go:510] duration metric: took 1.280977366s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 18:31:58.935700   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:00.935984   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:00.009825   77627 pod_ready.go:92] pod "coredns-7db6d8ff4d-wpnfg" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:00.009846   77627 pod_ready.go:81] duration metric: took 2.006300447s for pod "coredns-7db6d8ff4d-wpnfg" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:00.009855   77627 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:02.016463   77627 pod_ready.go:102] pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:04.515885   77627 pod_ready.go:102] pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:03.432654   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:05.434708   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:06.517308   77627 pod_ready.go:102] pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:09.016256   77627 pod_ready.go:92] pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.016276   77627 pod_ready.go:81] duration metric: took 9.006414116s for pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.016287   77627 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.021639   77627 pod_ready.go:92] pod "etcd-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.021661   77627 pod_ready.go:81] duration metric: took 5.365088ms for pod "etcd-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.021672   77627 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.026599   77627 pod_ready.go:92] pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.026618   77627 pod_ready.go:81] duration metric: took 4.939458ms for pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.026629   77627 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.031994   77627 pod_ready.go:92] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.032009   77627 pod_ready.go:81] duration metric: took 5.37307ms for pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.032020   77627 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kxf5z" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.036180   77627 pod_ready.go:92] pod "kube-proxy-kxf5z" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.036196   77627 pod_ready.go:81] duration metric: took 4.16934ms for pod "kube-proxy-kxf5z" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.036205   77627 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.414950   77627 pod_ready.go:92] pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.414973   77627 pod_ready.go:81] duration metric: took 378.76116ms for pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.414981   77627 pod_ready.go:38] duration metric: took 11.419116871s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
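	The pod_ready waits above repeatedly fetch each system-critical pod and inspect its Ready condition. A condensed client-go sketch of that check for one pod follows; the kubeconfig path and pod name are placeholders taken from this run, not an exact reproduction of pod_ready.go:

	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			log.Fatal(err)
		}
		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
			"etcd-embed-certs-409322", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		ready := false
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				ready = true
				break
			}
		}
		fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
	}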
	I0729 18:32:09.414995   77627 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:32:09.415042   77627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:32:09.434210   77627 api_server.go:72] duration metric: took 11.735691998s to wait for apiserver process to appear ...
	I0729 18:32:09.434240   77627 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:32:09.434260   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:32:09.439755   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I0729 18:32:09.440612   77627 api_server.go:141] control plane version: v1.30.3
	I0729 18:32:09.440631   77627 api_server.go:131] duration metric: took 6.382802ms to wait for apiserver health ...
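	The healthz wait above is an HTTPS GET against the apiserver's /healthz endpoint (here https://192.168.39.58:8443/healthz), treating a 200 "ok" body as healthy. A minimal sketch of that probe; skipping TLS verification is an illustration-only shortcut, since the real check trusts the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.58:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}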
	I0729 18:32:09.440640   77627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:32:09.617533   77627 system_pods.go:59] 9 kube-system pods found
	I0729 18:32:09.617564   77627 system_pods.go:61] "coredns-7db6d8ff4d-wpnfg" [687cbc8f-370a-4b72-bc1c-6ae36efe890e] Running
	I0729 18:32:09.617569   77627 system_pods.go:61] "coredns-7db6d8ff4d-wztpj" [1f1a01e7-9cec-4ba8-a340-8f9ccdd728d7] Running
	I0729 18:32:09.617572   77627 system_pods.go:61] "etcd-embed-certs-409322" [68de54c3-7d47-4e79-a064-08b013b1d910] Running
	I0729 18:32:09.617575   77627 system_pods.go:61] "kube-apiserver-embed-certs-409322" [dc1a0568-ef7c-493f-91fb-7438456daf6d] Running
	I0729 18:32:09.617579   77627 system_pods.go:61] "kube-controller-manager-embed-certs-409322" [da715e8c-2437-487b-b4e0-c93af2f079f7] Running
	I0729 18:32:09.617582   77627 system_pods.go:61] "kube-proxy-kxf5z" [74ed1812-b3bf-429d-b8f1-bdccb3415fb5] Running
	I0729 18:32:09.617584   77627 system_pods.go:61] "kube-scheduler-embed-certs-409322" [188cf21a-9a8a-45de-9a91-9e593626ce6d] Running
	I0729 18:32:09.617591   77627 system_pods.go:61] "metrics-server-569cc877fc-6q4nl" [57dc61cc-7490-49e5-9d03-c81aa5d25aea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:32:09.617596   77627 system_pods.go:61] "storage-provisioner" [b0b1e31d-9b5c-4e82-aea7-56184832c053] Running
	I0729 18:32:09.617604   77627 system_pods.go:74] duration metric: took 176.958452ms to wait for pod list to return data ...
	I0729 18:32:09.617614   77627 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:32:09.813846   77627 default_sa.go:45] found service account: "default"
	I0729 18:32:09.813871   77627 default_sa.go:55] duration metric: took 196.249412ms for default service account to be created ...
	I0729 18:32:09.813886   77627 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 18:32:10.019167   77627 system_pods.go:86] 9 kube-system pods found
	I0729 18:32:10.019199   77627 system_pods.go:89] "coredns-7db6d8ff4d-wpnfg" [687cbc8f-370a-4b72-bc1c-6ae36efe890e] Running
	I0729 18:32:10.019208   77627 system_pods.go:89] "coredns-7db6d8ff4d-wztpj" [1f1a01e7-9cec-4ba8-a340-8f9ccdd728d7] Running
	I0729 18:32:10.019214   77627 system_pods.go:89] "etcd-embed-certs-409322" [68de54c3-7d47-4e79-a064-08b013b1d910] Running
	I0729 18:32:10.019220   77627 system_pods.go:89] "kube-apiserver-embed-certs-409322" [dc1a0568-ef7c-493f-91fb-7438456daf6d] Running
	I0729 18:32:10.019227   77627 system_pods.go:89] "kube-controller-manager-embed-certs-409322" [da715e8c-2437-487b-b4e0-c93af2f079f7] Running
	I0729 18:32:10.019233   77627 system_pods.go:89] "kube-proxy-kxf5z" [74ed1812-b3bf-429d-b8f1-bdccb3415fb5] Running
	I0729 18:32:10.019239   77627 system_pods.go:89] "kube-scheduler-embed-certs-409322" [188cf21a-9a8a-45de-9a91-9e593626ce6d] Running
	I0729 18:32:10.019249   77627 system_pods.go:89] "metrics-server-569cc877fc-6q4nl" [57dc61cc-7490-49e5-9d03-c81aa5d25aea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:32:10.019257   77627 system_pods.go:89] "storage-provisioner" [b0b1e31d-9b5c-4e82-aea7-56184832c053] Running
	I0729 18:32:10.019267   77627 system_pods.go:126] duration metric: took 205.375742ms to wait for k8s-apps to be running ...
	I0729 18:32:10.019278   77627 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 18:32:10.019326   77627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:32:10.034632   77627 system_svc.go:56] duration metric: took 15.345747ms WaitForService to wait for kubelet
	I0729 18:32:10.034659   77627 kubeadm.go:582] duration metric: took 12.336145267s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:32:10.034687   77627 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:32:10.214205   77627 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:32:10.214240   77627 node_conditions.go:123] node cpu capacity is 2
	I0729 18:32:10.214255   77627 node_conditions.go:105] duration metric: took 179.559492ms to run NodePressure ...
	I0729 18:32:10.214269   77627 start.go:241] waiting for startup goroutines ...
	I0729 18:32:10.214279   77627 start.go:246] waiting for cluster config update ...
	I0729 18:32:10.214297   77627 start.go:255] writing updated cluster config ...
	I0729 18:32:10.214639   77627 ssh_runner.go:195] Run: rm -f paused
	I0729 18:32:10.264858   77627 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 18:32:10.266718   77627 out.go:177] * Done! kubectl is now configured to use "embed-certs-409322" cluster and "default" namespace by default
	I0729 18:32:07.934519   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:10.434593   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:13.262907   78080 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 18:32:13.263487   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:13.263679   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:32:12.934686   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:13.928481   77394 pod_ready.go:81] duration metric: took 4m0.00080059s for pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace to be "Ready" ...
	E0729 18:32:13.928509   77394 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 18:32:13.928528   77394 pod_ready.go:38] duration metric: took 4m10.042077465s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:32:13.928554   77394 kubeadm.go:597] duration metric: took 4m18.205651497s to restartPrimaryControlPlane
	W0729 18:32:13.928623   77394 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 18:32:13.928649   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:32:18.264261   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:18.264554   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:32:28.265190   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:28.265433   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
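	The repeated kubelet-check failures above (from a third minikube process in this run, pid 78080) mean nothing is answering on the kubelet's healthz port yet: the probe kubeadm describes is a plain HTTP GET against localhost:10248, and "connection refused" indicates the kubelet is not listening. A sketch of the equivalent check:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// Same check as: curl -sSL http://localhost:10248/healthz
	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			fmt.Println("kubelet not healthy:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
	}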
	I0729 18:32:40.226240   77394 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.297571665s)
	I0729 18:32:40.226316   77394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:32:40.243407   77394 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:32:40.254946   77394 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:32:40.264608   77394 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:32:40.264631   77394 kubeadm.go:157] found existing configuration files:
	
	I0729 18:32:40.264675   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:32:40.274180   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:32:40.274231   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:32:40.283752   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:32:40.293163   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:32:40.293232   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:32:40.302533   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:32:40.311972   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:32:40.312024   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:32:40.321513   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:32:40.330546   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:32:40.330599   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
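	The grep/rm pairs above implement a stale-config cleanup: each kubeconfig-style file under /etc/kubernetes is kept only if it references https://control-plane.minikube.internal:8443, and removed otherwise so the following "kubeadm init" can regenerate it (after the reset earlier, all four files are already gone, so every grep fails and every rm is a no-op). A rough sketch of that loop under those assumptions:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const marker = "https://control-plane.minikube.internal:8443"
		files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
		for _, f := range files {
			path := "/etc/kubernetes/" + f
			// If the file is missing or does not point at the expected control
			// plane, remove it; "rm -f" tolerates a missing file.
			if err := exec.Command("sudo", "grep", marker, path).Run(); err != nil {
				if err := exec.Command("sudo", "rm", "-f", path).Run(); err != nil {
					fmt.Fprintln(os.Stderr, "cleanup failed for", path, ":", err)
				}
			}
		}
	}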
	I0729 18:32:40.340190   77394 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:32:40.389517   77394 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 18:32:40.389592   77394 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:32:40.508682   77394 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:32:40.508783   77394 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:32:40.508859   77394 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 18:32:40.517673   77394 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:32:40.520623   77394 out.go:204]   - Generating certificates and keys ...
	I0729 18:32:40.520726   77394 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:32:40.520824   77394 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:32:40.520893   77394 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:32:40.520961   77394 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:32:40.521045   77394 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:32:40.521094   77394 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:32:40.521171   77394 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:32:40.521254   77394 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:32:40.521357   77394 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:32:40.521475   77394 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:32:40.521535   77394 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:32:40.521606   77394 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:32:40.615870   77394 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:32:40.837902   77394 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 18:32:40.924418   77394 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:32:41.068573   77394 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:32:41.287201   77394 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:32:41.287991   77394 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:32:41.293523   77394 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:32:41.295211   77394 out.go:204]   - Booting up control plane ...
	I0729 18:32:41.295329   77394 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:32:41.295455   77394 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:32:41.295560   77394 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:32:41.317802   77394 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:32:41.324522   77394 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:32:41.324589   77394 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:32:41.463007   77394 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 18:32:41.463116   77394 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 18:32:41.982144   77394 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 519.208408ms
	I0729 18:32:41.982263   77394 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 18:32:46.983564   77394 kubeadm.go:310] [api-check] The API server is healthy after 5.001335599s
	I0729 18:32:46.999811   77394 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 18:32:47.018194   77394 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 18:32:47.051359   77394 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 18:32:47.051564   77394 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-888056 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 18:32:47.062615   77394 kubeadm.go:310] [bootstrap-token] Using token: a14u5x.5d4oe8yqdl9tiifc
	I0729 18:32:47.064051   77394 out.go:204]   - Configuring RBAC rules ...
	I0729 18:32:47.064187   77394 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 18:32:47.071856   77394 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 18:32:47.084985   77394 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 18:32:47.088622   77394 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 18:32:47.091797   77394 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 18:32:47.096194   77394 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 18:32:47.391394   77394 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 18:32:47.834314   77394 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 18:32:48.394665   77394 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 18:32:48.394689   77394 kubeadm.go:310] 
	I0729 18:32:48.394763   77394 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 18:32:48.394797   77394 kubeadm.go:310] 
	I0729 18:32:48.394928   77394 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 18:32:48.394941   77394 kubeadm.go:310] 
	I0729 18:32:48.394979   77394 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 18:32:48.395058   77394 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 18:32:48.395126   77394 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 18:32:48.395141   77394 kubeadm.go:310] 
	I0729 18:32:48.395221   77394 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 18:32:48.395230   77394 kubeadm.go:310] 
	I0729 18:32:48.395297   77394 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 18:32:48.395306   77394 kubeadm.go:310] 
	I0729 18:32:48.395374   77394 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 18:32:48.395467   77394 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 18:32:48.395554   77394 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 18:32:48.395563   77394 kubeadm.go:310] 
	I0729 18:32:48.395652   77394 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 18:32:48.395766   77394 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 18:32:48.395778   77394 kubeadm.go:310] 
	I0729 18:32:48.395886   77394 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a14u5x.5d4oe8yqdl9tiifc \
	I0729 18:32:48.396030   77394 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 \
	I0729 18:32:48.396062   77394 kubeadm.go:310] 	--control-plane 
	I0729 18:32:48.396071   77394 kubeadm.go:310] 
	I0729 18:32:48.396191   77394 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 18:32:48.396200   77394 kubeadm.go:310] 
	I0729 18:32:48.396276   77394 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a14u5x.5d4oe8yqdl9tiifc \
	I0729 18:32:48.396393   77394 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 
	I0729 18:32:48.397540   77394 kubeadm.go:310] W0729 18:32:40.358164    2949 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 18:32:48.397921   77394 kubeadm.go:310] W0729 18:32:40.359840    2949 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 18:32:48.398071   77394 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:32:48.398090   77394 cni.go:84] Creating CNI manager for ""
	I0729 18:32:48.398099   77394 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:32:48.399641   77394 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:32:48.266531   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:48.266736   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:32:48.400846   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:32:48.412594   77394 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 18:32:48.434792   77394 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 18:32:48.434872   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:48.434907   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-888056 minikube.k8s.io/updated_at=2024_07_29T18_32_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899 minikube.k8s.io/name=no-preload-888056 minikube.k8s.io/primary=true
	I0729 18:32:48.672892   77394 ops.go:34] apiserver oom_adj: -16
	I0729 18:32:48.673144   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:49.173811   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:49.673775   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:50.173717   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:50.673774   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:51.174068   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:51.673565   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:52.173431   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:52.673602   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:53.173912   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:53.315565   77394 kubeadm.go:1113] duration metric: took 4.880757535s to wait for elevateKubeSystemPrivileges
	I0729 18:32:53.315609   77394 kubeadm.go:394] duration metric: took 4m57.645527986s to StartCluster
	I0729 18:32:53.315633   77394 settings.go:142] acquiring lock: {Name:mkd2c4591636cc1d19b23a0dab1807db2e7ea395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:32:53.315736   77394 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:32:53.317360   77394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:32:53.317579   77394 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.80 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:32:53.317669   77394 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 18:32:53.317784   77394 addons.go:69] Setting storage-provisioner=true in profile "no-preload-888056"
	I0729 18:32:53.317820   77394 addons.go:234] Setting addon storage-provisioner=true in "no-preload-888056"
	I0729 18:32:53.317817   77394 addons.go:69] Setting default-storageclass=true in profile "no-preload-888056"
	W0729 18:32:53.317835   77394 addons.go:243] addon storage-provisioner should already be in state true
	I0729 18:32:53.317840   77394 config.go:182] Loaded profile config "no-preload-888056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:32:53.317836   77394 addons.go:69] Setting metrics-server=true in profile "no-preload-888056"
	I0729 18:32:53.317861   77394 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-888056"
	I0729 18:32:53.317878   77394 host.go:66] Checking if "no-preload-888056" exists ...
	I0729 18:32:53.317882   77394 addons.go:234] Setting addon metrics-server=true in "no-preload-888056"
	W0729 18:32:53.317892   77394 addons.go:243] addon metrics-server should already be in state true
	I0729 18:32:53.317927   77394 host.go:66] Checking if "no-preload-888056" exists ...
	I0729 18:32:53.318302   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.318308   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.318334   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.318345   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.318301   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.318441   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.319022   77394 out.go:177] * Verifying Kubernetes components...
	I0729 18:32:53.320383   77394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:32:53.335666   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38257
	I0729 18:32:53.336170   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.336860   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.336896   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.337301   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.338104   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39753
	I0729 18:32:53.338137   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40655
	I0729 18:32:53.338545   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.338559   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.338595   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.338614   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.339076   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.339094   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.339163   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.339188   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.339510   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.340089   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.340126   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.340346   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.340557   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:32:53.344286   77394 addons.go:234] Setting addon default-storageclass=true in "no-preload-888056"
	W0729 18:32:53.344307   77394 addons.go:243] addon default-storageclass should already be in state true
	I0729 18:32:53.344335   77394 host.go:66] Checking if "no-preload-888056" exists ...
	I0729 18:32:53.344702   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.344727   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.356006   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33765
	I0729 18:32:53.356613   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.357135   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.357159   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.357517   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.357604   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34733
	I0729 18:32:53.357752   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:32:53.358011   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.358472   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.358490   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.358898   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.359110   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:32:53.359546   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:32:53.360493   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:32:53.361662   77394 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:32:53.362464   77394 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 18:32:53.363294   77394 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:32:53.363311   77394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 18:32:53.363331   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:32:53.364170   77394 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 18:32:53.364182   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41425
	I0729 18:32:53.364186   77394 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 18:32:53.364205   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:32:53.364560   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.365040   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.365061   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.365515   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.365963   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.365983   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.367883   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.368768   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.369264   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:32:53.369284   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.369576   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:32:53.369591   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.369858   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:32:53.369964   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:32:53.370009   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:32:53.370102   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:32:53.370169   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:32:53.370198   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:32:53.370317   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:32:53.370344   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:32:53.382571   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37093
	I0729 18:32:53.382940   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.383311   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.383336   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.383748   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.383946   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:32:53.385570   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:32:53.385761   77394 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 18:32:53.385775   77394 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 18:32:53.385792   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:32:53.388411   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.388756   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:32:53.388774   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.389017   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:32:53.389193   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:32:53.389350   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:32:53.389463   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:32:53.585542   77394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:32:53.645556   77394 node_ready.go:35] waiting up to 6m0s for node "no-preload-888056" to be "Ready" ...
	I0729 18:32:53.657965   77394 node_ready.go:49] node "no-preload-888056" has status "Ready":"True"
	I0729 18:32:53.657997   77394 node_ready.go:38] duration metric: took 12.408834ms for node "no-preload-888056" to be "Ready" ...
	I0729 18:32:53.658010   77394 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:32:53.673068   77394 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:53.724224   77394 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 18:32:53.724248   77394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 18:32:53.763536   77394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 18:32:53.774123   77394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:32:53.812615   77394 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 18:32:53.812639   77394 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 18:32:53.945274   77394 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:32:53.945303   77394 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 18:32:54.107180   77394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:32:54.184354   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.184379   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.184699   77394 main.go:141] libmachine: (no-preload-888056) DBG | Closing plugin on server side
	I0729 18:32:54.184748   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.184762   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.184776   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.184786   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.185015   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.185043   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.185077   77394 main.go:141] libmachine: (no-preload-888056) DBG | Closing plugin on server side
	I0729 18:32:54.244759   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.244781   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.245108   77394 main.go:141] libmachine: (no-preload-888056) DBG | Closing plugin on server side
	I0729 18:32:54.245156   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.245169   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.782604   77394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.008443119s)
	I0729 18:32:54.782663   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.782676   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.782990   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.783010   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.783020   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.783028   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.783265   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.783283   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.946051   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.946074   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.946396   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.946418   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.946430   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.946439   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.946680   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.946698   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.946710   77394 addons.go:475] Verifying addon metrics-server=true in "no-preload-888056"
	I0729 18:32:54.948362   77394 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 18:32:54.949821   77394 addons.go:510] duration metric: took 1.632153415s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0729 18:32:55.679655   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:57.680175   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace has status "Ready":"False"
	I0729 18:33:00.179877   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace has status "Ready":"False"
	I0729 18:33:01.180068   77394 pod_ready.go:92] pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.180094   77394 pod_ready.go:81] duration metric: took 7.506992362s for pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.180106   77394 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-j9ddw" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.185742   77394 pod_ready.go:92] pod "coredns-5cfdc65f69-j9ddw" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.185760   77394 pod_ready.go:81] duration metric: took 5.647157ms for pod "coredns-5cfdc65f69-j9ddw" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.185769   77394 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.190056   77394 pod_ready.go:92] pod "etcd-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.190077   77394 pod_ready.go:81] duration metric: took 4.30181ms for pod "etcd-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.190085   77394 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.194255   77394 pod_ready.go:92] pod "kube-apiserver-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.194273   77394 pod_ready.go:81] duration metric: took 4.182006ms for pod "kube-apiserver-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.194284   77394 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.199056   77394 pod_ready.go:92] pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.199072   77394 pod_ready.go:81] duration metric: took 4.779158ms for pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.199081   77394 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-94ff9" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.578279   77394 pod_ready.go:92] pod "kube-proxy-94ff9" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.578299   77394 pod_ready.go:81] duration metric: took 379.211109ms for pod "kube-proxy-94ff9" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.578308   77394 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:02.378184   77394 pod_ready.go:92] pod "kube-scheduler-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:02.378205   77394 pod_ready.go:81] duration metric: took 799.890202ms for pod "kube-scheduler-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:02.378212   77394 pod_ready.go:38] duration metric: took 8.720189182s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:33:02.378226   77394 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:33:02.378282   77394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:33:02.396023   77394 api_server.go:72] duration metric: took 9.07841179s to wait for apiserver process to appear ...
	I0729 18:33:02.396050   77394 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:33:02.396070   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:33:02.403736   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 200:
	ok
	I0729 18:33:02.404828   77394 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 18:33:02.404850   77394 api_server.go:131] duration metric: took 8.793481ms to wait for apiserver health ...
	I0729 18:33:02.404858   77394 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:33:02.580656   77394 system_pods.go:59] 9 kube-system pods found
	I0729 18:33:02.580683   77394 system_pods.go:61] "coredns-5cfdc65f69-bbh6c" [66b43af3-78eb-437f-81d7-eedb4cc34349] Running
	I0729 18:33:02.580687   77394 system_pods.go:61] "coredns-5cfdc65f69-j9ddw" [679f8750-86aa-4e00-8291-6996b54b1930] Running
	I0729 18:33:02.580691   77394 system_pods.go:61] "etcd-no-preload-888056" [abcd648d-659a-4f02-a769-f2222eaac945] Running
	I0729 18:33:02.580695   77394 system_pods.go:61] "kube-apiserver-no-preload-888056" [99a48803-06b1-44a6-a0cc-f28f2ba7235f] Running
	I0729 18:33:02.580699   77394 system_pods.go:61] "kube-controller-manager-no-preload-888056" [6bb3d64c-9fef-41ee-a68d-170fac01dec5] Running
	I0729 18:33:02.580702   77394 system_pods.go:61] "kube-proxy-94ff9" [dd06899e-3d54-4b71-bda6-f8c6d06ce100] Running
	I0729 18:33:02.580704   77394 system_pods.go:61] "kube-scheduler-no-preload-888056" [a1b60226-df5e-45ce-8382-a8d277278129] Running
	I0729 18:33:02.580710   77394 system_pods.go:61] "metrics-server-78fcd8795b-9qqmj" [45bbbaf3-cf3e-4db1-9eec-693425bc5dff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:33:02.580714   77394 system_pods.go:61] "storage-provisioner" [0aacb67c-abea-47fb-a2f1-f1245e68599a] Running
	I0729 18:33:02.580721   77394 system_pods.go:74] duration metric: took 175.857868ms to wait for pod list to return data ...
	I0729 18:33:02.580728   77394 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:33:02.778962   77394 default_sa.go:45] found service account: "default"
	I0729 18:33:02.778987   77394 default_sa.go:55] duration metric: took 198.250326ms for default service account to be created ...
	I0729 18:33:02.778995   77394 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 18:33:02.981123   77394 system_pods.go:86] 9 kube-system pods found
	I0729 18:33:02.981159   77394 system_pods.go:89] "coredns-5cfdc65f69-bbh6c" [66b43af3-78eb-437f-81d7-eedb4cc34349] Running
	I0729 18:33:02.981166   77394 system_pods.go:89] "coredns-5cfdc65f69-j9ddw" [679f8750-86aa-4e00-8291-6996b54b1930] Running
	I0729 18:33:02.981175   77394 system_pods.go:89] "etcd-no-preload-888056" [abcd648d-659a-4f02-a769-f2222eaac945] Running
	I0729 18:33:02.981181   77394 system_pods.go:89] "kube-apiserver-no-preload-888056" [99a48803-06b1-44a6-a0cc-f28f2ba7235f] Running
	I0729 18:33:02.981186   77394 system_pods.go:89] "kube-controller-manager-no-preload-888056" [6bb3d64c-9fef-41ee-a68d-170fac01dec5] Running
	I0729 18:33:02.981190   77394 system_pods.go:89] "kube-proxy-94ff9" [dd06899e-3d54-4b71-bda6-f8c6d06ce100] Running
	I0729 18:33:02.981196   77394 system_pods.go:89] "kube-scheduler-no-preload-888056" [a1b60226-df5e-45ce-8382-a8d277278129] Running
	I0729 18:33:02.981206   77394 system_pods.go:89] "metrics-server-78fcd8795b-9qqmj" [45bbbaf3-cf3e-4db1-9eec-693425bc5dff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:33:02.981214   77394 system_pods.go:89] "storage-provisioner" [0aacb67c-abea-47fb-a2f1-f1245e68599a] Running
	I0729 18:33:02.981228   77394 system_pods.go:126] duration metric: took 202.226569ms to wait for k8s-apps to be running ...
	I0729 18:33:02.981239   77394 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 18:33:02.981290   77394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:33:02.999134   77394 system_svc.go:56] duration metric: took 17.878004ms WaitForService to wait for kubelet
	I0729 18:33:02.999169   77394 kubeadm.go:582] duration metric: took 9.681562891s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:33:02.999187   77394 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:33:03.179246   77394 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:33:03.179274   77394 node_conditions.go:123] node cpu capacity is 2
	I0729 18:33:03.179286   77394 node_conditions.go:105] duration metric: took 180.093491ms to run NodePressure ...
	I0729 18:33:03.179312   77394 start.go:241] waiting for startup goroutines ...
	I0729 18:33:03.179322   77394 start.go:246] waiting for cluster config update ...
	I0729 18:33:03.179344   77394 start.go:255] writing updated cluster config ...
	I0729 18:33:03.179658   77394 ssh_runner.go:195] Run: rm -f paused
	I0729 18:33:03.228664   77394 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 18:33:03.230706   77394 out.go:177] * Done! kubectl is now configured to use "no-preload-888056" cluster and "default" namespace by default
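	The no-preload-888056 start above completed cleanly: the addons were enabled, every kube-system pod reported Ready, and the apiserver healthz probe at https://192.168.72.80:8443/healthz returned 200. A minimal way to spot-check the same state from the test host, assuming only the profile name and node IP taken from this log (everything else here is illustrative, not part of the recorded run):

		# list kube-system pods for the profile's context, as the readiness wait above did
		kubectl --context no-preload-888056 -n kube-system get pods -o wide
		# probe the apiserver health endpoint the log checked at 18:33:02 (-k because the profile CA is not in the host trust store)
		curl -k https://192.168.72.80:8443/healthz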
	I0729 18:33:28.269122   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:33:28.269375   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:33:28.269399   78080 kubeadm.go:310] 
	I0729 18:33:28.269433   78080 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 18:33:28.269471   78080 kubeadm.go:310] 		timed out waiting for the condition
	I0729 18:33:28.269480   78080 kubeadm.go:310] 
	I0729 18:33:28.269508   78080 kubeadm.go:310] 	This error is likely caused by:
	I0729 18:33:28.269541   78080 kubeadm.go:310] 		- The kubelet is not running
	I0729 18:33:28.269686   78080 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 18:33:28.269698   78080 kubeadm.go:310] 
	I0729 18:33:28.269846   78080 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 18:33:28.269902   78080 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 18:33:28.269946   78080 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 18:33:28.269969   78080 kubeadm.go:310] 
	I0729 18:33:28.270132   78080 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 18:33:28.270246   78080 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 18:33:28.270258   78080 kubeadm.go:310] 
	I0729 18:33:28.270434   78080 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 18:33:28.270567   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 18:33:28.270674   78080 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 18:33:28.270774   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 18:33:28.270784   78080 kubeadm.go:310] 
	I0729 18:33:28.271347   78080 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:33:28.271428   78080 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 18:33:28.271503   78080 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 18:33:28.271650   78080 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
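	Before the reset-and-retry that follows, the diagnostics kubeadm suggests in the output above can also be run on the node from the test host; a minimal sketch via minikube ssh, where <profile> is a placeholder because the failing profile's name does not appear in this excerpt:

		# kubelet status and recent unit logs, per the kubeadm hint
		minikube ssh -p <profile> -- sudo systemctl status kubelet
		minikube ssh -p <profile> -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
		# list any control-plane containers CRI-O managed to start
		minikube ssh -p <profile> -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a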
	I0729 18:33:28.271713   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:33:28.743675   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:33:28.759228   78080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:33:28.768522   78080 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:33:28.768546   78080 kubeadm.go:157] found existing configuration files:
	
	I0729 18:33:28.768593   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:33:28.777423   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:33:28.777481   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:33:28.786450   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:33:28.795335   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:33:28.795386   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:33:28.804519   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:33:28.813137   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:33:28.813193   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:33:28.822053   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:33:28.830463   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:33:28.830513   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:33:28.839818   78080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:33:29.066010   78080 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:35:25.197434   78080 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 18:35:25.197566   78080 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 18:35:25.199476   78080 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 18:35:25.199554   78080 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:35:25.199667   78080 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:35:25.199800   78080 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:35:25.199937   78080 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:35:25.200054   78080 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:35:25.201801   78080 out.go:204]   - Generating certificates and keys ...
	I0729 18:35:25.201875   78080 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:35:25.201944   78080 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:35:25.202073   78080 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:35:25.202136   78080 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:35:25.202231   78080 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:35:25.202287   78080 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:35:25.202339   78080 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:35:25.202426   78080 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:35:25.202492   78080 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:35:25.202560   78080 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:35:25.202603   78080 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:35:25.202692   78080 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:35:25.202779   78080 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:35:25.202863   78080 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:35:25.202962   78080 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:35:25.203070   78080 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:35:25.203213   78080 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:35:25.203289   78080 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:35:25.203323   78080 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:35:25.203381   78080 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:35:25.204837   78080 out.go:204]   - Booting up control plane ...
	I0729 18:35:25.204920   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:35:25.204985   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:35:25.205053   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:35:25.205146   78080 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:35:25.205274   78080 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 18:35:25.205316   78080 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 18:35:25.205379   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.205591   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.205658   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.205828   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.205926   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.206142   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.206204   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.206411   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.206488   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.206683   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.206698   78080 kubeadm.go:310] 
	I0729 18:35:25.206755   78080 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 18:35:25.206817   78080 kubeadm.go:310] 		timed out waiting for the condition
	I0729 18:35:25.206827   78080 kubeadm.go:310] 
	I0729 18:35:25.206860   78080 kubeadm.go:310] 	This error is likely caused by:
	I0729 18:35:25.206890   78080 kubeadm.go:310] 		- The kubelet is not running
	I0729 18:35:25.206975   78080 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 18:35:25.206985   78080 kubeadm.go:310] 
	I0729 18:35:25.207099   78080 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 18:35:25.207134   78080 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 18:35:25.207167   78080 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 18:35:25.207177   78080 kubeadm.go:310] 
	I0729 18:35:25.207289   78080 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 18:35:25.207403   78080 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 18:35:25.207412   78080 kubeadm.go:310] 
	I0729 18:35:25.207532   78080 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 18:35:25.207640   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 18:35:25.207754   78080 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 18:35:25.207821   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 18:35:25.207854   78080 kubeadm.go:310] 
	I0729 18:35:25.207886   78080 kubeadm.go:394] duration metric: took 7m57.080498205s to StartCluster
	I0729 18:35:25.207923   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:35:25.207983   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:35:25.251803   78080 cri.go:89] found id: ""
	I0729 18:35:25.251841   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.251852   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:35:25.251859   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:35:25.251920   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:35:25.287842   78080 cri.go:89] found id: ""
	I0729 18:35:25.287877   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.287895   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:35:25.287903   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:35:25.287967   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:35:25.324546   78080 cri.go:89] found id: ""
	I0729 18:35:25.324573   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.324582   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:35:25.324588   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:35:25.324634   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:35:25.375723   78080 cri.go:89] found id: ""
	I0729 18:35:25.375746   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.375753   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:35:25.375759   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:35:25.375812   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:35:25.412580   78080 cri.go:89] found id: ""
	I0729 18:35:25.412604   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.412612   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:35:25.412617   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:35:25.412664   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:35:25.449360   78080 cri.go:89] found id: ""
	I0729 18:35:25.449397   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.449406   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:35:25.449413   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:35:25.449464   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:35:25.485655   78080 cri.go:89] found id: ""
	I0729 18:35:25.485687   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.485698   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:35:25.485705   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:35:25.485769   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:35:25.521752   78080 cri.go:89] found id: ""
	I0729 18:35:25.521776   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.521783   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:35:25.521792   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:35:25.521808   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:35:25.562894   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:35:25.562922   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:35:25.623879   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:35:25.623912   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:35:25.647315   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:35:25.647341   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:35:25.744827   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:35:25.744850   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:35:25.744865   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0729 18:35:25.849394   78080 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 18:35:25.849445   78080 out.go:239] * 
	W0729 18:35:25.849520   78080 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 18:35:25.849558   78080 out.go:239] * 
	W0729 18:35:25.850438   78080 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 18:35:25.853770   78080 out.go:177] 
	W0729 18:35:25.854982   78080 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 18:35:25.855035   78080 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 18:35:25.855060   78080 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 18:35:25.856444   78080 out.go:177] 
	
	
	==> CRI-O <==
	Jul 29 18:35:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:35:27.606198545Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278127606158351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=de3ac9a0-f0bb-4d78-b255-ee4068bc895a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:35:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:35:27.607174367Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3c5f0a7-ba74-4b41-8d18-7ce4cd6371c0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:35:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:35:27.607277450Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3c5f0a7-ba74-4b41-8d18-7ce4cd6371c0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:35:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:35:27.607329416Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e3c5f0a7-ba74-4b41-8d18-7ce4cd6371c0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:35:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:35:27.639236020Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=035dc1fc-5449-43b5-94f7-48332a49ca36 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:35:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:35:27.639326714Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=035dc1fc-5449-43b5-94f7-48332a49ca36 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:35:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:35:27.640989942Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=06135b7f-7b29-4e67-b0e7-e9315f510d1b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:35:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:35:27.641402363Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278127641376644,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=06135b7f-7b29-4e67-b0e7-e9315f510d1b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:35:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:35:27.642165746Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c2f378a7-f0cb-45e2-a501-bd9652734eb1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:35:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:35:27.642246838Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c2f378a7-f0cb-45e2-a501-bd9652734eb1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:35:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:35:27.642288538Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c2f378a7-f0cb-45e2-a501-bd9652734eb1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:35:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:35:27.675005987Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8c076790-11c3-4497-a353-7fe20d75d7af name=/runtime.v1.RuntimeService/Version
	Jul 29 18:35:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:35:27.675074986Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8c076790-11c3-4497-a353-7fe20d75d7af name=/runtime.v1.RuntimeService/Version
	Jul 29 18:35:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:35:27.676427116Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8fdb041e-a606-413c-a5c9-a7b8a5b12334 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:35:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:35:27.676863977Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278127676841257,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8fdb041e-a606-413c-a5c9-a7b8a5b12334 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:35:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:35:27.677396089Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e9d978b-b644-47b3-b693-f2636c92d25b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:35:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:35:27.677447836Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e9d978b-b644-47b3-b693-f2636c92d25b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:35:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:35:27.677478827Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6e9d978b-b644-47b3-b693-f2636c92d25b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:35:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:35:27.710623002Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7948797b-ea7d-426a-9d10-9e0977dea816 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:35:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:35:27.710719552Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7948797b-ea7d-426a-9d10-9e0977dea816 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:35:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:35:27.711945297Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=03e7c704-bd42-4810-82c3-38f08a046649 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:35:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:35:27.712359654Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278127712338466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03e7c704-bd42-4810-82c3-38f08a046649 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:35:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:35:27.712894779Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8234ce70-9041-4140-b3fc-160acf419289 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:35:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:35:27.712943134Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8234ce70-9041-4140-b3fc-160acf419289 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:35:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:35:27.712981649Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8234ce70-9041-4140-b3fc-160acf419289 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul29 18:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053138] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.049545] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.032104] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.544033] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.650966] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.461872] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.060757] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073737] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.211657] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.138817] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.279940] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +6.393435] systemd-fstab-generator[838]: Ignoring "noauto" option for root device
	[  +0.066263] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.863430] systemd-fstab-generator[963]: Ignoring "noauto" option for root device
	[ +12.812099] kauditd_printk_skb: 46 callbacks suppressed
	[Jul29 18:31] systemd-fstab-generator[5034]: Ignoring "noauto" option for root device
	[Jul29 18:33] systemd-fstab-generator[5313]: Ignoring "noauto" option for root device
	[  +0.068948] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:35:27 up 8 min,  0 users,  load average: 0.01, 0.11, 0.07
	Linux old-k8s-version-386663 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 29 18:35:24 old-k8s-version-386663 kubelet[5488]:         /usr/local/go/src/net/net.go:182 +0x8e
	Jul 29 18:35:24 old-k8s-version-386663 kubelet[5488]: bufio.(*Reader).Read(0xc000c986c0, 0xc00024f1b8, 0x9, 0x9, 0xc000724dc8, 0x40a605, 0xc000785e60)
	Jul 29 18:35:24 old-k8s-version-386663 kubelet[5488]:         /usr/local/go/src/bufio/bufio.go:227 +0x222
	Jul 29 18:35:24 old-k8s-version-386663 kubelet[5488]: io.ReadAtLeast(0x4f04880, 0xc000c986c0, 0xc00024f1b8, 0x9, 0x9, 0x9, 0xc0007e0a40, 0x3f50d20, 0xc000c96660)
	Jul 29 18:35:24 old-k8s-version-386663 kubelet[5488]:         /usr/local/go/src/io/io.go:314 +0x87
	Jul 29 18:35:24 old-k8s-version-386663 kubelet[5488]: io.ReadFull(...)
	Jul 29 18:35:24 old-k8s-version-386663 kubelet[5488]:         /usr/local/go/src/io/io.go:333
	Jul 29 18:35:24 old-k8s-version-386663 kubelet[5488]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader(0xc00024f1b8, 0x9, 0x9, 0x4f04880, 0xc000c986c0, 0x0, 0xc000000000, 0xc000c96660, 0xc000116750)
	Jul 29 18:35:24 old-k8s-version-386663 kubelet[5488]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x89
	Jul 29 18:35:24 old-k8s-version-386663 kubelet[5488]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc00024f180, 0xc000c8ce10, 0x1, 0x0, 0x0)
	Jul 29 18:35:24 old-k8s-version-386663 kubelet[5488]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Jul 29 18:35:24 old-k8s-version-386663 kubelet[5488]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc000930540)
	Jul 29 18:35:24 old-k8s-version-386663 kubelet[5488]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Jul 29 18:35:24 old-k8s-version-386663 kubelet[5488]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jul 29 18:35:24 old-k8s-version-386663 kubelet[5488]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Jul 29 18:35:24 old-k8s-version-386663 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 29 18:35:24 old-k8s-version-386663 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 29 18:35:25 old-k8s-version-386663 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jul 29 18:35:25 old-k8s-version-386663 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 29 18:35:25 old-k8s-version-386663 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 29 18:35:25 old-k8s-version-386663 kubelet[5545]: I0729 18:35:25.674065    5545 server.go:416] Version: v1.20.0
	Jul 29 18:35:25 old-k8s-version-386663 kubelet[5545]: I0729 18:35:25.674341    5545 server.go:837] Client rotation is on, will bootstrap in background
	Jul 29 18:35:25 old-k8s-version-386663 kubelet[5545]: I0729 18:35:25.676410    5545 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 29 18:35:25 old-k8s-version-386663 kubelet[5545]: W0729 18:35:25.677298    5545 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 29 18:35:25 old-k8s-version-386663 kubelet[5545]: I0729 18:35:25.677612    5545 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-386663 -n old-k8s-version-386663
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-386663 -n old-k8s-version-386663: exit status 2 (241.091239ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-386663" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (762.10s)
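What the log above records is the kubelet never becoming healthy on the v1.20.0 control plane: kubeadm's check against http://localhost:10248/healthz was refused until the wait-control-plane phase timed out, systemd's restart counter for kubelet.service reached 20, and with the apiserver on localhost:8443 down every follow-up kubectl call was skipped. The log itself points at a cgroup-driver mismatch on a cgroup v2 host ("Cannot detect current cgroup on cgroup v2") and suggests retrying with the systemd cgroup driver. A minimal manual follow-up, assuming the same profile name, driver, and binaries used in this run, could look like this (the node-side commands are the ones quoted in the kubeadm output above):

    # on the node: inspect the kubelet and any control-plane containers
    systemctl status kubelet
    journalctl -xeu kubelet
    crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

    # retry the start with the cgroup-driver override minikube suggests
    out/minikube-linux-amd64 start -p old-k8s-version-386663 --driver=kvm2 --container-runtime=crio \
        --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

This is a troubleshooting sketch assembled from the suggestions already present in the captured output, not a step the test run performed.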

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0729 18:31:52.902465   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
E0729 18:31:54.098858   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/enable-default-cni-729010/client.crt: no such file or directory
E0729 18:31:54.288254   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/flannel-729010/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-502055 -n default-k8s-diff-port-502055
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-29 18:40:40.271472614 +0000 UTC m=+6296.938840910
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
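A quick manual check of the same condition the test waited on, before reading the post-mortem below, would be to list the labelled pods directly. This assumes the kubeconfig context created for the profile (the report's other kubectl calls use the profile name as the context):

    kubectl --context default-k8s-diff-port-502055 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard -o wide

The test polls exactly this namespace/label pair for 9m0s before giving up with the context-deadline error shown above.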
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-502055 -n default-k8s-diff-port-502055
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-502055 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-502055 logs -n 25: (1.990083543s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-729010 sudo cat                              | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo                                  | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo                                  | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo                                  | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo find                             | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo crio                             | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-729010                                       | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	| delete  | -p                                                     | disable-driver-mounts-603863 | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | disable-driver-mounts-603863                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:19 UTC |
	|         | default-k8s-diff-port-502055                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-888056             | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC | 29 Jul 24 18:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-888056                                   | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-409322            | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC | 29 Jul 24 18:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-409322                                  | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-502055  | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC |                     |
	|         | default-k8s-diff-port-502055                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-386663        | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-888056                  | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-888056 --memory=2200                     | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:33 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-409322                 | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-409322                                  | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-502055       | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:31 UTC |
	|         | default-k8s-diff-port-502055                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-386663                              | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:22 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-386663             | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-386663                              | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 18:22:47
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 18:22:47.218965   78080 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:22:47.219209   78080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:22:47.219217   78080 out.go:304] Setting ErrFile to fd 2...
	I0729 18:22:47.219222   78080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:22:47.219370   78080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 18:22:47.219863   78080 out.go:298] Setting JSON to false
	I0729 18:22:47.220726   78080 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7519,"bootTime":1722269848,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:22:47.220777   78080 start.go:139] virtualization: kvm guest
	I0729 18:22:47.222804   78080 out.go:177] * [old-k8s-version-386663] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:22:47.224119   78080 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 18:22:47.224173   78080 notify.go:220] Checking for updates...
	I0729 18:22:47.226449   78080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:22:47.227676   78080 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:22:47.228809   78080 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 18:22:47.229914   78080 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:22:47.230906   78080 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:22:47.232363   78080 config.go:182] Loaded profile config "old-k8s-version-386663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 18:22:47.232750   78080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:22:47.232814   78080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:22:47.247542   78080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44723
	I0729 18:22:47.247909   78080 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:22:47.248418   78080 main.go:141] libmachine: Using API Version  1
	I0729 18:22:47.248436   78080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:22:47.248786   78080 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:22:47.248965   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:22:47.250635   78080 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 18:22:47.251760   78080 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:22:47.252055   78080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:22:47.252098   78080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:22:47.266291   78080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35843
	I0729 18:22:47.266672   78080 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:22:47.267136   78080 main.go:141] libmachine: Using API Version  1
	I0729 18:22:47.267157   78080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:22:47.267492   78080 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:22:47.267662   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:22:47.303335   78080 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 18:22:47.304503   78080 start.go:297] selected driver: kvm2
	I0729 18:22:47.304513   78080 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:22:47.304607   78080 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:22:47.305291   78080 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:22:47.305360   78080 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19345-11206/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:22:47.319918   78080 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:22:47.320315   78080 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:22:47.320341   78080 cni.go:84] Creating CNI manager for ""
	I0729 18:22:47.320349   78080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:22:47.320386   78080 start.go:340] cluster config:
	{Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:22:47.320480   78080 iso.go:125] acquiring lock: {Name:mke302f851ce8256f9b44dd080ed38df68285cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:22:47.322357   78080 out.go:177] * Starting "old-k8s-version-386663" primary control-plane node in "old-k8s-version-386663" cluster
	I0729 18:22:43.378634   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:22:46.450644   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:22:47.323622   78080 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:22:47.323653   78080 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 18:22:47.323660   78080 cache.go:56] Caching tarball of preloaded images
	I0729 18:22:47.323740   78080 preload.go:172] Found /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:22:47.323761   78080 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 18:22:47.323849   78080 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/config.json ...
	I0729 18:22:47.324021   78080 start.go:360] acquireMachinesLock for old-k8s-version-386663: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:22:52.530551   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:22:55.602731   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:01.682636   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:04.754621   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:10.834616   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:13.906688   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:19.986655   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:23.059064   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:29.138659   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:32.210758   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:38.290665   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:41.362732   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:47.442637   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:50.514656   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:56.594611   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:59.666706   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:05.746649   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:08.818685   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:14.898642   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:17.970619   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:24.050664   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:27.122664   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:33.202629   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:36.274678   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:42.354674   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:45.426704   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:51.506670   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:54.578602   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:00.658683   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:03.730663   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:09.810619   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:12.882598   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:18.962612   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:22.034673   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:28.114638   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:31.186598   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:37.266642   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:40.338599   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:46.418679   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:49.490705   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:55.570690   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:58.642719   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:04.722643   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:07.794711   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:13.874638   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:16.946806   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:19.951345   77627 start.go:364] duration metric: took 4m10.060086709s to acquireMachinesLock for "embed-certs-409322"
	I0729 18:26:19.951406   77627 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:26:19.951414   77627 fix.go:54] fixHost starting: 
	I0729 18:26:19.951732   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:26:19.951761   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:26:19.967602   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41827
	I0729 18:26:19.968062   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:26:19.968486   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:26:19.968505   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:26:19.968809   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:26:19.969009   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:19.969135   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:26:19.970757   77627 fix.go:112] recreateIfNeeded on embed-certs-409322: state=Stopped err=<nil>
	I0729 18:26:19.970784   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	W0729 18:26:19.970931   77627 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:26:19.972631   77627 out.go:177] * Restarting existing kvm2 VM for "embed-certs-409322" ...
	I0729 18:26:19.948656   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:26:19.948718   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:26:19.949066   77394 buildroot.go:166] provisioning hostname "no-preload-888056"
	I0729 18:26:19.949096   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:26:19.949286   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:26:19.951194   77394 machine.go:97] duration metric: took 4m37.435248922s to provisionDockerMachine
	I0729 18:26:19.951238   77394 fix.go:56] duration metric: took 4m37.45552986s for fixHost
	I0729 18:26:19.951246   77394 start.go:83] releasing machines lock for "no-preload-888056", held for 4m37.455571504s
	W0729 18:26:19.951284   77394 start.go:714] error starting host: provision: host is not running
	W0729 18:26:19.951381   77394 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 18:26:19.951389   77394 start.go:729] Will try again in 5 seconds ...
	I0729 18:26:19.973786   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Start
	I0729 18:26:19.973923   77627 main.go:141] libmachine: (embed-certs-409322) Ensuring networks are active...
	I0729 18:26:19.974594   77627 main.go:141] libmachine: (embed-certs-409322) Ensuring network default is active
	I0729 18:26:19.974930   77627 main.go:141] libmachine: (embed-certs-409322) Ensuring network mk-embed-certs-409322 is active
	I0729 18:26:19.975500   77627 main.go:141] libmachine: (embed-certs-409322) Getting domain xml...
	I0729 18:26:19.976135   77627 main.go:141] libmachine: (embed-certs-409322) Creating domain...
	I0729 18:26:21.186491   77627 main.go:141] libmachine: (embed-certs-409322) Waiting to get IP...
	I0729 18:26:21.187403   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:21.187857   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:21.187924   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:21.187843   78811 retry.go:31] will retry after 218.694883ms: waiting for machine to come up
	I0729 18:26:21.408404   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:21.408843   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:21.408872   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:21.408795   78811 retry.go:31] will retry after 335.138992ms: waiting for machine to come up
	I0729 18:26:21.745329   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:21.745805   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:21.745828   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:21.745759   78811 retry.go:31] will retry after 317.831297ms: waiting for machine to come up
	I0729 18:26:22.065446   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:22.065985   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:22.066024   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:22.065948   78811 retry.go:31] will retry after 557.945634ms: waiting for machine to come up
	I0729 18:26:22.625624   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:22.626020   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:22.626047   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:22.625967   78811 retry.go:31] will retry after 739.991425ms: waiting for machine to come up
	I0729 18:26:23.368166   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:23.368523   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:23.368549   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:23.368477   78811 retry.go:31] will retry after 878.16479ms: waiting for machine to come up
	I0729 18:26:24.248467   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:24.248871   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:24.248895   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:24.248813   78811 retry.go:31] will retry after 1.022542608s: waiting for machine to come up
	I0729 18:26:24.952911   77394 start.go:360] acquireMachinesLock for no-preload-888056: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:26:25.273470   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:25.273886   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:25.273913   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:25.273829   78811 retry.go:31] will retry after 1.313344307s: waiting for machine to come up
	I0729 18:26:26.589378   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:26.589805   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:26.589852   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:26.589769   78811 retry.go:31] will retry after 1.553795128s: waiting for machine to come up
	I0729 18:26:28.145271   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:28.145680   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:28.145704   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:28.145643   78811 retry.go:31] will retry after 1.859680601s: waiting for machine to come up
	I0729 18:26:30.007588   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:30.007988   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:30.008018   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:30.007937   78811 retry.go:31] will retry after 1.754805493s: waiting for machine to come up
	I0729 18:26:31.764527   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:31.765077   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:31.765107   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:31.765030   78811 retry.go:31] will retry after 2.769383357s: waiting for machine to come up
	I0729 18:26:34.536479   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:34.536972   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:34.537007   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:34.536921   78811 retry.go:31] will retry after 3.355218512s: waiting for machine to come up
	I0729 18:26:39.563371   77859 start.go:364] duration metric: took 3m59.712120998s to acquireMachinesLock for "default-k8s-diff-port-502055"
	I0729 18:26:39.563440   77859 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:26:39.563452   77859 fix.go:54] fixHost starting: 
	I0729 18:26:39.563871   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:26:39.563914   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:26:39.580545   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I0729 18:26:39.580962   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:26:39.581492   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:26:39.581518   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:26:39.581864   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:26:39.582096   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:39.582290   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:26:39.583857   77859 fix.go:112] recreateIfNeeded on default-k8s-diff-port-502055: state=Stopped err=<nil>
	I0729 18:26:39.583883   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	W0729 18:26:39.584062   77859 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:26:39.586281   77859 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-502055" ...
	I0729 18:26:39.587651   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Start
	I0729 18:26:39.587814   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Ensuring networks are active...
	I0729 18:26:39.588499   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Ensuring network default is active
	I0729 18:26:39.588864   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Ensuring network mk-default-k8s-diff-port-502055 is active
	I0729 18:26:39.589616   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Getting domain xml...
	I0729 18:26:39.590433   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Creating domain...
	I0729 18:26:37.896070   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:37.896640   77627 main.go:141] libmachine: (embed-certs-409322) Found IP for machine: 192.168.39.58
	I0729 18:26:37.896664   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has current primary IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:37.896670   77627 main.go:141] libmachine: (embed-certs-409322) Reserving static IP address...
	I0729 18:26:37.897129   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "embed-certs-409322", mac: "52:54:00:22:9f:57", ip: "192.168.39.58"} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:37.897157   77627 main.go:141] libmachine: (embed-certs-409322) Reserved static IP address: 192.168.39.58
	I0729 18:26:37.897173   77627 main.go:141] libmachine: (embed-certs-409322) DBG | skip adding static IP to network mk-embed-certs-409322 - found existing host DHCP lease matching {name: "embed-certs-409322", mac: "52:54:00:22:9f:57", ip: "192.168.39.58"}
	I0729 18:26:37.897189   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Getting to WaitForSSH function...
	I0729 18:26:37.897206   77627 main.go:141] libmachine: (embed-certs-409322) Waiting for SSH to be available...
	I0729 18:26:37.899216   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:37.899595   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:37.899616   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:37.899785   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Using SSH client type: external
	I0729 18:26:37.899808   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa (-rw-------)
	I0729 18:26:37.899845   77627 main.go:141] libmachine: (embed-certs-409322) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:26:37.899858   77627 main.go:141] libmachine: (embed-certs-409322) DBG | About to run SSH command:
	I0729 18:26:37.899872   77627 main.go:141] libmachine: (embed-certs-409322) DBG | exit 0
	I0729 18:26:38.026619   77627 main.go:141] libmachine: (embed-certs-409322) DBG | SSH cmd err, output: <nil>: 
	I0729 18:26:38.027028   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetConfigRaw
	I0729 18:26:38.027621   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetIP
	I0729 18:26:38.030532   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.030963   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.030989   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.031243   77627 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/config.json ...
	I0729 18:26:38.031413   77627 machine.go:94] provisionDockerMachine start ...
	I0729 18:26:38.031437   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:38.031642   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.033867   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.034218   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.034251   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.034380   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:38.034545   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.034682   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.034807   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:38.034992   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:38.035175   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:38.035185   77627 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:26:38.142565   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:26:38.142595   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetMachineName
	I0729 18:26:38.142842   77627 buildroot.go:166] provisioning hostname "embed-certs-409322"
	I0729 18:26:38.142872   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetMachineName
	I0729 18:26:38.143071   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.145625   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.145951   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.145974   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.146217   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:38.146423   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.146577   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.146730   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:38.146861   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:38.147046   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:38.147065   77627 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-409322 && echo "embed-certs-409322" | sudo tee /etc/hostname
	I0729 18:26:38.264341   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-409322
	
	I0729 18:26:38.264368   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.266846   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.267144   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.267171   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.267328   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:38.267488   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.267660   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.267757   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:38.267936   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:38.268106   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:38.268122   77627 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-409322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-409322/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-409322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:26:38.383748   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:26:38.383779   77627 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:26:38.383805   77627 buildroot.go:174] setting up certificates
	I0729 18:26:38.383817   77627 provision.go:84] configureAuth start
	I0729 18:26:38.383827   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetMachineName
	I0729 18:26:38.384110   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetIP
	I0729 18:26:38.386936   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.387320   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.387348   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.387508   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.389550   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.389871   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.389910   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.389978   77627 provision.go:143] copyHostCerts
	I0729 18:26:38.390039   77627 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:26:38.390052   77627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:26:38.390137   77627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:26:38.390257   77627 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:26:38.390268   77627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:26:38.390308   77627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:26:38.390406   77627 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:26:38.390416   77627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:26:38.390456   77627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:26:38.390526   77627 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.embed-certs-409322 san=[127.0.0.1 192.168.39.58 embed-certs-409322 localhost minikube]
	I0729 18:26:38.903674   77627 provision.go:177] copyRemoteCerts
	I0729 18:26:38.903758   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:26:38.903791   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.906662   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.906984   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.907018   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.907171   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:38.907360   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.907543   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:38.907667   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:26:38.992373   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 18:26:39.016465   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:26:39.039598   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:26:39.062415   77627 provision.go:87] duration metric: took 678.589364ms to configureAuth
	I0729 18:26:39.062443   77627 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:26:39.062622   77627 config.go:182] Loaded profile config "embed-certs-409322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:26:39.062696   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.065308   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.065703   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.065728   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.065902   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.066076   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.066244   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.066403   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.066553   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:39.066743   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:39.066759   77627 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:26:39.326153   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:26:39.326176   77627 machine.go:97] duration metric: took 1.29475208s to provisionDockerMachine
	I0729 18:26:39.326186   77627 start.go:293] postStartSetup for "embed-certs-409322" (driver="kvm2")
	I0729 18:26:39.326195   77627 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:26:39.326209   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.326603   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:26:39.326637   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.329049   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.329448   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.329476   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.329616   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.329822   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.330022   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.330186   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:26:39.413084   77627 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:26:39.417438   77627 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:26:39.417462   77627 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:26:39.417535   77627 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:26:39.417626   77627 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:26:39.417749   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:26:39.427256   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:26:39.451330   77627 start.go:296] duration metric: took 125.132889ms for postStartSetup
	I0729 18:26:39.451362   77627 fix.go:56] duration metric: took 19.499949606s for fixHost
	I0729 18:26:39.451380   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.453750   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.454047   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.454072   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.454237   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.454416   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.454570   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.454698   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.454864   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:39.455069   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:39.455080   77627 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:26:39.563211   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277599.531173461
	
	I0729 18:26:39.563238   77627 fix.go:216] guest clock: 1722277599.531173461
	I0729 18:26:39.563248   77627 fix.go:229] Guest: 2024-07-29 18:26:39.531173461 +0000 UTC Remote: 2024-07-29 18:26:39.451365859 +0000 UTC m=+269.697720486 (delta=79.807602ms)
	I0729 18:26:39.563278   77627 fix.go:200] guest clock delta is within tolerance: 79.807602ms
	I0729 18:26:39.563287   77627 start.go:83] releasing machines lock for "embed-certs-409322", held for 19.611902888s
	I0729 18:26:39.563318   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.563562   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetIP
	I0729 18:26:39.566225   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.566549   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.566575   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.566766   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.567227   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.567378   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.567460   77627 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:26:39.567501   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.567565   77627 ssh_runner.go:195] Run: cat /version.json
	I0729 18:26:39.567593   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.570113   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.570330   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.570536   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.570558   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.570747   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.570754   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.570776   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.570883   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.571004   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.571113   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.571211   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.571330   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.571438   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:26:39.571478   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:26:39.651235   77627 ssh_runner.go:195] Run: systemctl --version
	I0729 18:26:39.677383   77627 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:26:39.824036   77627 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:26:39.830027   77627 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:26:39.830103   77627 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:26:39.845939   77627 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:26:39.845963   77627 start.go:495] detecting cgroup driver to use...
	I0729 18:26:39.846019   77627 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:26:39.862867   77627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:26:39.878060   77627 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:26:39.878152   77627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:26:39.892471   77627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:26:39.906690   77627 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:26:40.039725   77627 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:26:40.201419   77627 docker.go:233] disabling docker service ...
	I0729 18:26:40.201489   77627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:26:40.222454   77627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:26:40.237523   77627 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:26:40.371463   77627 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:26:40.499676   77627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:26:40.514068   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:26:40.534051   77627 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:26:40.534114   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.545364   77627 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:26:40.545458   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.557113   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.568215   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.579433   77627 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:26:40.591005   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.601933   77627 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.621097   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.631960   77627 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:26:40.642308   77627 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:26:40.642383   77627 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:26:40.656469   77627 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:26:40.671251   77627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:26:40.784289   77627 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:26:40.933837   77627 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:26:40.933910   77627 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:26:40.939031   77627 start.go:563] Will wait 60s for crictl version
	I0729 18:26:40.939086   77627 ssh_runner.go:195] Run: which crictl
	I0729 18:26:40.943166   77627 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:26:40.985673   77627 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:26:40.985753   77627 ssh_runner.go:195] Run: crio --version
	I0729 18:26:41.013973   77627 ssh_runner.go:195] Run: crio --version
	I0729 18:26:41.046080   77627 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:26:40.822462   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting to get IP...
	I0729 18:26:40.823526   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:40.823948   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:40.824000   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:40.823920   78947 retry.go:31] will retry after 262.026124ms: waiting for machine to come up
	I0729 18:26:41.087492   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.087961   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.087991   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:41.087913   78947 retry.go:31] will retry after 380.066984ms: waiting for machine to come up
	I0729 18:26:41.469728   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.470215   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.470244   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:41.470181   78947 retry.go:31] will retry after 293.069239ms: waiting for machine to come up
	I0729 18:26:41.764797   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.765277   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.765303   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:41.765228   78947 retry.go:31] will retry after 491.247116ms: waiting for machine to come up
	I0729 18:26:42.257741   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:42.258247   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:42.258275   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:42.258220   78947 retry.go:31] will retry after 693.832082ms: waiting for machine to come up
	I0729 18:26:42.953375   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:42.954146   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:42.954169   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:42.954051   78947 retry.go:31] will retry after 710.005115ms: waiting for machine to come up
	I0729 18:26:43.666068   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:43.666478   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:43.666504   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:43.666438   78947 retry.go:31] will retry after 1.077324053s: waiting for machine to come up
	I0729 18:26:41.047322   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetIP
	I0729 18:26:41.049993   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:41.050394   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:41.050433   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:41.050630   77627 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 18:26:41.054805   77627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:26:41.066926   77627 kubeadm.go:883] updating cluster {Name:embed-certs-409322 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-409322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:26:41.067053   77627 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:26:41.067115   77627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:26:41.103417   77627 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 18:26:41.103489   77627 ssh_runner.go:195] Run: which lz4
	I0729 18:26:41.107793   77627 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 18:26:41.112161   77627 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:26:41.112192   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 18:26:42.559564   77627 crio.go:462] duration metric: took 1.451801292s to copy over tarball
	I0729 18:26:42.559679   77627 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:26:44.759513   77627 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.199801336s)
	I0729 18:26:44.759543   77627 crio.go:469] duration metric: took 2.199942615s to extract the tarball
	I0729 18:26:44.759554   77627 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 18:26:44.744984   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:44.745450   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:44.745477   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:44.745403   78947 retry.go:31] will retry after 1.064257005s: waiting for machine to come up
	I0729 18:26:45.811414   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:45.811840   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:45.811880   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:45.811799   78947 retry.go:31] will retry after 1.30236943s: waiting for machine to come up
	I0729 18:26:47.116252   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:47.116668   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:47.116728   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:47.116647   78947 retry.go:31] will retry after 1.424333691s: waiting for machine to come up
	I0729 18:26:48.543481   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:48.543945   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:48.543973   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:48.543894   78947 retry.go:31] will retry after 2.106061522s: waiting for machine to come up
	I0729 18:26:44.798609   77627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:26:44.848236   77627 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:26:44.848257   77627 cache_images.go:84] Images are preloaded, skipping loading
	I0729 18:26:44.848265   77627 kubeadm.go:934] updating node { 192.168.39.58 8443 v1.30.3 crio true true} ...
	I0729 18:26:44.848355   77627 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-409322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-409322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:26:44.848415   77627 ssh_runner.go:195] Run: crio config
	I0729 18:26:44.901558   77627 cni.go:84] Creating CNI manager for ""
	I0729 18:26:44.901584   77627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:26:44.901597   77627 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:26:44.901625   77627 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-409322 NodeName:embed-certs-409322 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:26:44.901807   77627 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-409322"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:26:44.901875   77627 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:26:44.912290   77627 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:26:44.912351   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:26:44.921801   77627 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0729 18:26:44.940473   77627 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:26:44.958445   77627 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0729 18:26:44.976890   77627 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I0729 18:26:44.980974   77627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:26:44.994793   77627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:26:45.120453   77627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:26:45.138398   77627 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322 for IP: 192.168.39.58
	I0729 18:26:45.138419   77627 certs.go:194] generating shared ca certs ...
	I0729 18:26:45.138438   77627 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:26:45.138592   77627 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:26:45.138643   77627 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:26:45.138657   77627 certs.go:256] generating profile certs ...
	I0729 18:26:45.138751   77627 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/client.key
	I0729 18:26:45.138823   77627 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/apiserver.key.4af4a6b9
	I0729 18:26:45.138889   77627 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/proxy-client.key
	I0729 18:26:45.139034   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:26:45.139074   77627 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:26:45.139088   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:26:45.139122   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:26:45.139161   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:26:45.139200   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:26:45.139305   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:26:45.139979   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:26:45.177194   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:26:45.206349   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:26:45.242291   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:26:45.277062   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 18:26:45.312447   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 18:26:45.345482   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:26:45.369151   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:26:45.394521   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:26:45.418579   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:26:45.443252   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:26:45.466770   77627 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:26:45.484159   77627 ssh_runner.go:195] Run: openssl version
	I0729 18:26:45.490045   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:26:45.501166   77627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:26:45.505930   77627 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:26:45.505988   77627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:26:45.511926   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:26:45.522860   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:26:45.533560   77627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:26:45.538411   77627 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:26:45.538474   77627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:26:45.544485   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:26:45.555603   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:26:45.566407   77627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:26:45.570892   77627 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:26:45.570944   77627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:26:45.576555   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 18:26:45.587780   77627 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:26:45.592689   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:26:45.598981   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:26:45.604952   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:26:45.611225   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:26:45.617506   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:26:45.623744   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 18:26:45.629836   77627 kubeadm.go:392] StartCluster: {Name:embed-certs-409322 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-409322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:26:45.629947   77627 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:26:45.630003   77627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:26:45.667768   77627 cri.go:89] found id: ""
	I0729 18:26:45.667853   77627 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:26:45.678703   77627 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:26:45.678724   77627 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:26:45.678772   77627 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:26:45.691979   77627 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:26:45.693237   77627 kubeconfig.go:125] found "embed-certs-409322" server: "https://192.168.39.58:8443"
	I0729 18:26:45.696093   77627 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:26:45.708981   77627 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.58
	I0729 18:26:45.709017   77627 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:26:45.709030   77627 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:26:45.709088   77627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:26:45.748738   77627 cri.go:89] found id: ""
	I0729 18:26:45.748817   77627 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:26:45.775148   77627 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:26:45.786631   77627 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:26:45.786651   77627 kubeadm.go:157] found existing configuration files:
	
	I0729 18:26:45.786701   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:26:45.799453   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:26:45.799507   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:26:45.809691   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:26:45.819592   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:26:45.819638   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:26:45.832072   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:26:45.843769   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:26:45.843817   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:26:45.854649   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:26:45.863448   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:26:45.863504   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:26:45.872399   77627 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:26:45.881992   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:46.012679   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:47.143076   77627 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.130359187s)
	I0729 18:26:47.143112   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:47.370854   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:47.446808   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:47.550087   77627 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:26:47.550191   77627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:26:48.050502   77627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:26:48.550499   77627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:26:48.608713   77627 api_server.go:72] duration metric: took 1.058625786s to wait for apiserver process to appear ...
	I0729 18:26:48.608745   77627 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:26:48.608773   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:51.829925   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:26:51.829963   77627 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:26:51.829979   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:51.843474   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:26:51.843503   77627 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:26:52.109882   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:52.117387   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:26:52.117415   77627 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:26:52.608863   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:52.613809   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:26:52.613840   77627 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:26:53.109430   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:53.115353   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I0729 18:26:53.122373   77627 api_server.go:141] control plane version: v1.30.3
	I0729 18:26:53.122411   77627 api_server.go:131] duration metric: took 4.513658045s to wait for apiserver health ...
	I0729 18:26:53.122420   77627 cni.go:84] Creating CNI manager for ""
	I0729 18:26:53.122426   77627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:26:53.123807   77627 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:26:50.651329   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:50.651724   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:50.651753   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:50.651678   78947 retry.go:31] will retry after 3.358167933s: waiting for machine to come up
	I0729 18:26:54.014102   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:54.014543   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:54.014576   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:54.014495   78947 retry.go:31] will retry after 4.372189125s: waiting for machine to come up
	I0729 18:26:53.124953   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:26:53.140970   77627 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 18:26:53.179660   77627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:26:53.193885   77627 system_pods.go:59] 8 kube-system pods found
	I0729 18:26:53.193921   77627 system_pods.go:61] "coredns-7db6d8ff4d-vxvfc" [da2fd5a1-f57f-4374-99ee-9017e228176f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 18:26:53.193932   77627 system_pods.go:61] "etcd-embed-certs-409322" [3eca462f-6156-4858-a886-30d0d32faa35] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 18:26:53.193944   77627 system_pods.go:61] "kube-apiserver-embed-certs-409322" [4c6473c7-d7b8-4513-b800-7cab08748d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 18:26:53.193953   77627 system_pods.go:61] "kube-controller-manager-embed-certs-409322" [2dc47da0-3d24-49d8-91ae-13074468b423] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 18:26:53.193961   77627 system_pods.go:61] "kube-proxy-zf5jf" [a0b6fd82-d0b1-4821-a668-4cb6420b4860] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 18:26:53.193969   77627 system_pods.go:61] "kube-scheduler-embed-certs-409322" [ab422567-58e6-4f22-a7cf-391b35cc386c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 18:26:53.193977   77627 system_pods.go:61] "metrics-server-569cc877fc-flh27" [83d6c69c-200d-4ce2-80e9-b83ff5b6ebe9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:26:53.193989   77627 system_pods.go:61] "storage-provisioner" [73ff548f-26c3-4442-a9bd-bdac45261476] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 18:26:53.194002   77627 system_pods.go:74] duration metric: took 14.320361ms to wait for pod list to return data ...
	I0729 18:26:53.194014   77627 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:26:53.197826   77627 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:26:53.197858   77627 node_conditions.go:123] node cpu capacity is 2
	I0729 18:26:53.197870   77627 node_conditions.go:105] duration metric: took 3.850077ms to run NodePressure ...
	I0729 18:26:53.197884   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:53.467868   77627 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 18:26:53.471886   77627 kubeadm.go:739] kubelet initialised
	I0729 18:26:53.471905   77627 kubeadm.go:740] duration metric: took 4.016417ms waiting for restarted kubelet to initialise ...
	I0729 18:26:53.471912   77627 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:26:53.476695   77627 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-vxvfc" in "kube-system" namespace to be "Ready" ...
	I0729 18:26:53.480449   77627 pod_ready.go:97] node "embed-certs-409322" hosting pod "coredns-7db6d8ff4d-vxvfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.480481   77627 pod_ready.go:81] duration metric: took 3.766ms for pod "coredns-7db6d8ff4d-vxvfc" in "kube-system" namespace to be "Ready" ...
	E0729 18:26:53.480491   77627 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-409322" hosting pod "coredns-7db6d8ff4d-vxvfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.480501   77627 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:26:53.484712   77627 pod_ready.go:97] node "embed-certs-409322" hosting pod "etcd-embed-certs-409322" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.484739   77627 pod_ready.go:81] duration metric: took 4.228077ms for pod "etcd-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	E0729 18:26:53.484750   77627 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-409322" hosting pod "etcd-embed-certs-409322" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.484759   77627 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:26:53.488510   77627 pod_ready.go:97] node "embed-certs-409322" hosting pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.488532   77627 pod_ready.go:81] duration metric: took 3.76371ms for pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	E0729 18:26:53.488539   77627 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-409322" hosting pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.488545   77627 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:26:58.387940   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.388358   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Found IP for machine: 192.168.61.244
	I0729 18:26:58.388383   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has current primary IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.388396   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Reserving static IP address...
	I0729 18:26:58.388794   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-502055", mac: "52:54:00:ae:63:e1", ip: "192.168.61.244"} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.388826   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Reserved static IP address: 192.168.61.244
	I0729 18:26:58.388848   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | skip adding static IP to network mk-default-k8s-diff-port-502055 - found existing host DHCP lease matching {name: "default-k8s-diff-port-502055", mac: "52:54:00:ae:63:e1", ip: "192.168.61.244"}
	I0729 18:26:58.388873   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for SSH to be available...
	I0729 18:26:58.388894   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Getting to WaitForSSH function...
	I0729 18:26:58.390937   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.391281   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.391319   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.391381   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Using SSH client type: external
	I0729 18:26:58.391408   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa (-rw-------)
	I0729 18:26:58.391457   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.244 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:26:58.391490   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | About to run SSH command:
	I0729 18:26:58.391511   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | exit 0
	I0729 18:26:58.518399   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | SSH cmd err, output: <nil>: 
	I0729 18:26:58.518782   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetConfigRaw
	I0729 18:26:58.519492   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetIP
	I0729 18:26:58.522245   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.522580   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.522615   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.522862   77859 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/config.json ...
	I0729 18:26:58.523037   77859 machine.go:94] provisionDockerMachine start ...
	I0729 18:26:58.523053   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:58.523258   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:58.525654   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.525998   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.526018   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.526185   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:58.526351   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.526555   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.526705   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:58.526874   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:58.527066   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:58.527079   77859 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:26:58.635267   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:26:58.635302   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetMachineName
	I0729 18:26:58.635524   77859 buildroot.go:166] provisioning hostname "default-k8s-diff-port-502055"
	I0729 18:26:58.635550   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetMachineName
	I0729 18:26:58.635789   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:58.638770   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.639235   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.639265   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.639371   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:58.639564   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.639729   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.639865   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:58.640048   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:58.640227   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:58.640245   77859 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-502055 && echo "default-k8s-diff-port-502055" | sudo tee /etc/hostname
	I0729 18:26:58.760577   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-502055
	
	I0729 18:26:58.760603   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:58.763294   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.763591   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.763625   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.763766   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:58.763970   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.764159   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.764311   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:58.764480   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:58.764641   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:58.764659   77859 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-502055' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-502055/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-502055' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:26:58.879366   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
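The two SSH commands above set the guest's hostname and make sure /etc/hosts resolves it before provisioning continues. As a rough illustration of how such commands can be pushed to the VM over SSH, here is a minimal, self-contained Go sketch using golang.org/x/crypto/ssh; the address, username, key path and command string are taken from the log above, but the runSSH helper is an assumption for illustration only, not minikube's actual provisioning code.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH opens a session on the target VM and runs one shell command,
// returning its combined output (hypothetical helper, illustration only).
func runSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Values taken from the log above; the key is the profile's machine key.
	out, err := runSSH("192.168.61.244:22", "docker",
		"/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa",
		`sudo hostname default-k8s-diff-port-502055 && echo "default-k8s-diff-port-502055" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}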
	I0729 18:26:58.879400   77859 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:26:58.879440   77859 buildroot.go:174] setting up certificates
	I0729 18:26:58.879451   77859 provision.go:84] configureAuth start
	I0729 18:26:58.879463   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetMachineName
	I0729 18:26:58.879735   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetIP
	I0729 18:26:58.882335   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.882652   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.882680   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.882848   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:58.885023   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.885313   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.885339   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.885433   77859 provision.go:143] copyHostCerts
	I0729 18:26:58.885479   77859 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:26:58.885488   77859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:26:58.885544   77859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:26:58.885633   77859 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:26:58.885641   77859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:26:58.885660   77859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:26:58.885709   77859 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:26:58.885716   77859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:26:58.885733   77859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:26:58.885783   77859 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-502055 san=[127.0.0.1 192.168.61.244 default-k8s-diff-port-502055 localhost minikube]
	I0729 18:26:59.130657   77859 provision.go:177] copyRemoteCerts
	I0729 18:26:59.130724   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:26:59.130749   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.133536   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.133898   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.133922   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.134079   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.134260   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.134421   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.134530   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:26:59.216614   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 18:26:59.240540   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:26:59.267350   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:26:59.294003   77859 provision.go:87] duration metric: took 414.539559ms to configureAuth
	I0729 18:26:59.294032   77859 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:26:59.294222   77859 config.go:182] Loaded profile config "default-k8s-diff-port-502055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:26:59.294293   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.296911   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.297285   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.297311   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.297450   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.297656   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.297804   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.297935   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.298102   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:59.298265   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:59.298281   77859 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:26:59.557084   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:26:59.557131   77859 machine.go:97] duration metric: took 1.034080964s to provisionDockerMachine
	I0729 18:26:59.557148   77859 start.go:293] postStartSetup for "default-k8s-diff-port-502055" (driver="kvm2")
	I0729 18:26:59.557165   77859 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:26:59.557191   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.557496   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:26:59.557529   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.559962   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.560255   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.560276   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.560461   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.560635   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.560798   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.560953   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:26:59.645623   77859 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:26:59.650416   77859 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:26:59.650447   77859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:26:59.650531   77859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:26:59.650624   77859 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:26:59.650730   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:26:59.660864   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:26:59.685728   77859 start.go:296] duration metric: took 128.564534ms for postStartSetup
	I0729 18:26:59.685767   77859 fix.go:56] duration metric: took 20.122314731s for fixHost
	I0729 18:26:59.685791   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.688401   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.688773   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.688801   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.688978   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.689157   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.689293   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.689401   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.689551   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:59.689712   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:59.689722   77859 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:26:55.494570   77627 pod_ready.go:102] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"False"
	I0729 18:26:57.495784   77627 pod_ready.go:102] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"False"
	I0729 18:26:59.799712   78080 start.go:364] duration metric: took 4m12.475660562s to acquireMachinesLock for "old-k8s-version-386663"
	I0729 18:26:59.799786   78080 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:26:59.799796   78080 fix.go:54] fixHost starting: 
	I0729 18:26:59.800184   78080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:26:59.800215   78080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:26:59.816885   78080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37963
	I0729 18:26:59.817336   78080 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:26:59.817822   78080 main.go:141] libmachine: Using API Version  1
	I0729 18:26:59.817851   78080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:26:59.818283   78080 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:26:59.818505   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:26:59.818671   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetState
	I0729 18:26:59.820232   78080 fix.go:112] recreateIfNeeded on old-k8s-version-386663: state=Stopped err=<nil>
	I0729 18:26:59.820254   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	W0729 18:26:59.820426   78080 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:26:59.822140   78080 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-386663" ...
	I0729 18:26:59.799573   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277619.755982716
	
	I0729 18:26:59.799603   77859 fix.go:216] guest clock: 1722277619.755982716
	I0729 18:26:59.799614   77859 fix.go:229] Guest: 2024-07-29 18:26:59.755982716 +0000 UTC Remote: 2024-07-29 18:26:59.685771603 +0000 UTC m=+259.980298680 (delta=70.211113ms)
	I0729 18:26:59.799637   77859 fix.go:200] guest clock delta is within tolerance: 70.211113ms
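fix.go above reads the guest clock over SSH (`date +%s.%N`), compares it with the host clock, and accepts the machine because the delta (about 70ms here) is inside the allowed tolerance. A minimal sketch of that comparison, using the two timestamps from the log; the one-second tolerance is an assumption, since the actual threshold is not shown in the log:

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK returns the absolute guest/host clock skew and whether it is
// within the given tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Timestamps taken from the fix.go lines above.
	guest := time.Date(2024, time.July, 29, 18, 26, 59, 755982716, time.UTC)
	host := time.Date(2024, time.July, 29, 18, 26, 59, 685771603, time.UTC)
	delta, ok := clockDeltaOK(guest, host, time.Second) // 1s tolerance is an assumption
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}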
	I0729 18:26:59.799641   77859 start.go:83] releasing machines lock for "default-k8s-diff-port-502055", held for 20.236230068s
	I0729 18:26:59.799672   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.799944   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetIP
	I0729 18:26:59.802636   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.802983   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.803013   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.803248   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.803740   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.803927   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.804023   77859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:26:59.804070   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.804193   77859 ssh_runner.go:195] Run: cat /version.json
	I0729 18:26:59.804229   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.807037   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.807117   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.807395   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.807435   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.807528   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.807547   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.807565   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.807708   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.807717   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.807910   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.807936   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.808043   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:26:59.808098   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.808244   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:26:59.920371   77859 ssh_runner.go:195] Run: systemctl --version
	I0729 18:26:59.926620   77859 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:27:00.072161   77859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:27:00.079273   77859 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:27:00.079340   77859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:27:00.096528   77859 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:27:00.096550   77859 start.go:495] detecting cgroup driver to use...
	I0729 18:27:00.096610   77859 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:27:00.113690   77859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:27:00.129058   77859 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:27:00.129126   77859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:27:00.143930   77859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:27:00.158085   77859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:27:00.296398   77859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:27:00.482313   77859 docker.go:233] disabling docker service ...
	I0729 18:27:00.482459   77859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:27:00.501504   77859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:27:00.520932   77859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:27:00.657805   77859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:27:00.792064   77859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:27:00.807790   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:27:00.827373   77859 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:27:00.827423   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.838281   77859 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:27:00.838340   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.849533   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.860820   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.872359   77859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:27:00.883904   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.895589   77859 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.914639   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.926278   77859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:27:00.936329   77859 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:27:00.936383   77859 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:27:00.951219   77859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:27:00.966530   77859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:01.086665   77859 ssh_runner.go:195] Run: sudo systemctl restart crio
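The series of sed invocations above pins the pause image to registry.k8s.io/pause:3.9 and switches cri-o to the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A rough Go sketch of the same kind of in-place key rewrite; the file path and keys come from the commands above, while the setKey helper is an assumption for illustration, not minikube's code:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setKey replaces (or appends) a `key = value` line in a cri-o drop-in config,
// mirroring the sed edits shown in the log.
func setKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	line := fmt.Sprintf("%s = %s", key, value)
	if re.Match(conf) {
		return re.ReplaceAll(conf, []byte(line))
	}
	return append(conf, []byte("\n"+line+"\n")...)
}

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log above
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf = setKey(conf, "pause_image", `"registry.k8s.io/pause:3.9"`)
	conf = setKey(conf, "cgroup_manager", `"cgroupfs"`)
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		panic(err)
	}
	// crio must be restarted afterwards, as the log does with `systemctl restart crio`.
}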
	I0729 18:27:01.233627   77859 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:27:01.233703   77859 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:27:01.241055   77859 start.go:563] Will wait 60s for crictl version
	I0729 18:27:01.241122   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:27:01.244875   77859 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:27:01.284013   77859 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:27:01.284103   77859 ssh_runner.go:195] Run: crio --version
	I0729 18:27:01.315493   77859 ssh_runner.go:195] Run: crio --version
	I0729 18:27:01.348781   77859 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:26:59.823421   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .Start
	I0729 18:26:59.823575   78080 main.go:141] libmachine: (old-k8s-version-386663) Ensuring networks are active...
	I0729 18:26:59.824264   78080 main.go:141] libmachine: (old-k8s-version-386663) Ensuring network default is active
	I0729 18:26:59.824641   78080 main.go:141] libmachine: (old-k8s-version-386663) Ensuring network mk-old-k8s-version-386663 is active
	I0729 18:26:59.825024   78080 main.go:141] libmachine: (old-k8s-version-386663) Getting domain xml...
	I0729 18:26:59.825885   78080 main.go:141] libmachine: (old-k8s-version-386663) Creating domain...
	I0729 18:27:01.104265   78080 main.go:141] libmachine: (old-k8s-version-386663) Waiting to get IP...
	I0729 18:27:01.105349   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.105790   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.105836   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.105761   79098 retry.go:31] will retry after 308.255094ms: waiting for machine to come up
	I0729 18:27:01.415431   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.415999   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.416030   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.415952   79098 retry.go:31] will retry after 236.525723ms: waiting for machine to come up
	I0729 18:27:01.654767   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.655279   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.655312   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.655247   79098 retry.go:31] will retry after 311.010394ms: waiting for machine to come up
	I0729 18:27:01.967850   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.968374   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.968404   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.968333   79098 retry.go:31] will retry after 468.477549ms: waiting for machine to come up
	I0729 18:27:01.350059   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetIP
	I0729 18:27:01.352945   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:01.353398   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:27:01.353429   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:01.353630   77859 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 18:27:01.357955   77859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:01.371879   77859 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-502055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-502055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.244 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:27:01.372034   77859 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:27:01.372100   77859 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:01.412356   77859 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 18:27:01.412423   77859 ssh_runner.go:195] Run: which lz4
	I0729 18:27:01.417768   77859 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 18:27:01.422809   77859 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:27:01.422836   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 18:27:02.909800   77859 crio.go:462] duration metric: took 1.492088664s to copy over tarball
	I0729 18:27:02.909868   77859 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:26:59.995351   77627 pod_ready.go:102] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:01.999130   77627 pod_ready.go:102] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:04.012357   77627 pod_ready.go:92] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:04.012385   77627 pod_ready.go:81] duration metric: took 10.523832262s for pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.012398   77627 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zf5jf" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.025409   77627 pod_ready.go:92] pod "kube-proxy-zf5jf" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:04.025448   77627 pod_ready.go:81] duration metric: took 13.042254ms for pod "kube-proxy-zf5jf" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.025461   77627 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.036057   77627 pod_ready.go:92] pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:04.036078   77627 pod_ready.go:81] duration metric: took 10.608531ms for pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.036090   77627 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:02.438066   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:02.438657   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:02.438686   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:02.438618   79098 retry.go:31] will retry after 601.056921ms: waiting for machine to come up
	I0729 18:27:03.041582   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:03.042097   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:03.042127   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:03.042040   79098 retry.go:31] will retry after 712.049848ms: waiting for machine to come up
	I0729 18:27:03.755536   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:03.756010   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:03.756040   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:03.755988   79098 retry.go:31] will retry after 1.092318096s: waiting for machine to come up
	I0729 18:27:04.849745   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:04.850202   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:04.850226   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:04.850147   79098 retry.go:31] will retry after 903.54457ms: waiting for machine to come up
	I0729 18:27:05.754781   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:05.755193   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:05.755218   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:05.755157   79098 retry.go:31] will retry after 1.693512671s: waiting for machine to come up
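While the old-k8s-version VM boots, retry.go above polls libvirt for a DHCP lease and backs off between attempts (308ms, 236ms, ... 1.69s). A minimal sketch of that poll-with-backoff pattern; lookupIP is a stand-in assumption for whatever actually queries the hypervisor, and the backoff constants are illustrative rather than minikube's:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP is a stand-in for querying libvirt for the domain's current IP.
func lookupIP() (string, error) { return "", errNoLease }

// waitForIP polls lookupIP with a jittered, growing backoff until it succeeds
// or the deadline passes.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 2*time.Second {
			backoff += backoff / 2
		}
	}
	return "", fmt.Errorf("timed out waiting for IP")
}

func main() {
	if ip, err := waitForIP(5 * time.Second); err == nil {
		fmt.Println("got IP:", ip)
	}
}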
	I0729 18:27:05.188101   77859 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.27820184s)
	I0729 18:27:05.188132   77859 crio.go:469] duration metric: took 2.278304723s to extract the tarball
	I0729 18:27:05.188140   77859 ssh_runner.go:146] rm: /preloaded.tar.lz4
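Above, the runner first stats /preloaded.tar.lz4 on the VM, copies the cached preload tarball over when it is missing, unpacks it into /var with `tar -I lz4`, and finally removes the tarball. A rough Go sketch of that check-and-extract sequence using the same tar invocation; the copy step is elided and the extractPreload helper is an assumption for illustration:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks an lz4-compressed image tarball into dir, mirroring
// the `tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf`
// command in the log, then removes the tarball (root would be required on the VM).
func extractPreload(tarball, dir string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball not present: %w", err)
	}
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dir, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		return err
	}
	return os.Remove(tarball)
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}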
	I0729 18:27:05.227453   77859 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:05.274530   77859 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:27:05.274560   77859 cache_images.go:84] Images are preloaded, skipping loading
	I0729 18:27:05.274571   77859 kubeadm.go:934] updating node { 192.168.61.244 8444 v1.30.3 crio true true} ...
	I0729 18:27:05.274708   77859 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-502055 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-502055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
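kubeadm.go:946 above assembles the kubelet systemd drop-in from the node's values (hostname override, node IP, Kubernetes version) and later copies it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A small sketch of rendering such a drop-in with text/template; the template text mirrors the unit shown above but is an assumption for illustration, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Name}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

type node struct {
	Name    string
	IP      string
	Version string
}

func main() {
	// Values taken from the log above.
	n := node{Name: "default-k8s-diff-port-502055", IP: "192.168.61.244", Version: "v1.30.3"}
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// In the real flow the rendered text is scp'd to the kubelet drop-in path on the VM.
	if err := t.Execute(os.Stdout, n); err != nil {
		panic(err)
	}
}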
	I0729 18:27:05.274788   77859 ssh_runner.go:195] Run: crio config
	I0729 18:27:05.320697   77859 cni.go:84] Creating CNI manager for ""
	I0729 18:27:05.320725   77859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:27:05.320741   77859 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:27:05.320774   77859 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.244 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-502055 NodeName:default-k8s-diff-port-502055 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.244"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.244 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:27:05.320948   77859 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.244
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-502055"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.244
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.244"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:27:05.321028   77859 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:27:05.331541   77859 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:27:05.331609   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:27:05.341433   77859 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0729 18:27:05.358696   77859 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:27:05.376531   77859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0729 18:27:05.394349   77859 ssh_runner.go:195] Run: grep 192.168.61.244	control-plane.minikube.internal$ /etc/hosts
	I0729 18:27:05.398156   77859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.244	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:05.411839   77859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:05.561467   77859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:27:05.583184   77859 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055 for IP: 192.168.61.244
	I0729 18:27:05.583209   77859 certs.go:194] generating shared ca certs ...
	I0729 18:27:05.583251   77859 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:05.583406   77859 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:27:05.583460   77859 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:27:05.583473   77859 certs.go:256] generating profile certs ...
	I0729 18:27:05.583577   77859 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/client.key
	I0729 18:27:05.583642   77859 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/apiserver.key.2edc4448
	I0729 18:27:05.583692   77859 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/proxy-client.key
	I0729 18:27:05.583835   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:27:05.583872   77859 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:27:05.583886   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:27:05.583917   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:27:05.583957   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:27:05.583991   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:27:05.584048   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:05.584726   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:27:05.624996   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:27:05.670153   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:27:05.715354   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:27:05.743807   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 18:27:05.777366   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:27:05.802152   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:27:05.826974   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:27:05.850417   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:27:05.873185   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:27:05.899387   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:27:05.927963   77859 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:27:05.947817   77859 ssh_runner.go:195] Run: openssl version
	I0729 18:27:05.955635   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:27:05.969765   77859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:27:05.974559   77859 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:27:05.974606   77859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:27:05.980557   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:27:05.991819   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:27:06.004961   77859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:06.009999   77859 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:06.010074   77859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:06.016045   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:27:06.027698   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:27:06.039648   77859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:27:06.045057   77859 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:27:06.045130   77859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:27:06.051127   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 18:27:06.062761   77859 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:27:06.068832   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:27:06.076652   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:27:06.084517   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:27:06.091125   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:27:06.097346   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:27:06.103428   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
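Each of the -checkend 86400 runs above asks OpenSSL whether a control-plane certificate will still be valid 24 hours from now: exit status 0 means it will not expire inside that window, a non-zero exit would trigger regeneration. A hedged Go sketch of the same check (function name illustrative):

// certValidFor reports whether the certificate at path remains valid for the
// given window, using `openssl x509 -checkend <seconds>` as the log does.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func certValidFor(path string, window time.Duration) (bool, error) {
	secs := fmt.Sprintf("%d", int(window.Seconds()))
	cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", secs)
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			return false, nil // non-zero exit: certificate expires within the window
		}
		return false, err // openssl itself failed to run
	}
	return true, nil
}

func main() {
	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}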
	I0729 18:27:06.109312   77859 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-502055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-502055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.244 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:27:06.109403   77859 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:27:06.109440   77859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:06.153439   77859 cri.go:89] found id: ""
	I0729 18:27:06.153528   77859 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:27:06.166412   77859 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:27:06.166434   77859 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:27:06.166486   77859 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:27:06.183064   77859 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:27:06.184168   77859 kubeconfig.go:125] found "default-k8s-diff-port-502055" server: "https://192.168.61.244:8444"
	I0729 18:27:06.186283   77859 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:27:06.197418   77859 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.244
	I0729 18:27:06.197444   77859 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:27:06.197454   77859 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:27:06.197506   77859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:06.237753   77859 cri.go:89] found id: ""
	I0729 18:27:06.237839   77859 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:27:06.257323   77859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:27:06.269157   77859 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:27:06.269176   77859 kubeadm.go:157] found existing configuration files:
	
	I0729 18:27:06.269229   77859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 18:27:06.279313   77859 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:27:06.279369   77859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:27:06.292141   77859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 18:27:06.303961   77859 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:27:06.304028   77859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:27:06.316051   77859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 18:27:06.328004   77859 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:27:06.328064   77859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:27:06.340357   77859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 18:27:06.352021   77859 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:27:06.352068   77859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
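The grep/rm pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes must reference https://control-plane.minikube.internal:8444, and any file that does not (or, as here, does not exist) is removed so kubeadm can regenerate it. A compact sketch of that loop, assuming a runCmd helper that stands in for ssh_runner:

// cleanupStaleKubeconfigs removes any /etc/kubernetes/*.conf that does not
// reference the expected API-server endpoint, so `kubeadm init phase kubeconfig`
// can rewrite it on the next pass.
package main

import (
	"fmt"
	"os/exec"
)

func runCmd(name string, args ...string) error {
	return exec.Command(name, args...).Run()
}

func cleanupStaleKubeconfigs(endpoint string) {
	confs := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, c := range confs {
		path := "/etc/kubernetes/" + c
		// grep exits non-zero when the endpoint is missing or the file does not exist.
		if err := runCmd("sudo", "grep", endpoint, path); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, path)
			_ = runCmd("sudo", "rm", "-f", path)
		}
	}
}

func main() {
	cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8444")
}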
	I0729 18:27:06.364479   77859 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:27:06.375313   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:06.498692   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:07.853845   77859 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.355105254s)
	I0729 18:27:07.853882   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:08.069616   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:08.144574   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
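A control-plane restart replays individual kubeadm init phases rather than a full init: certs, kubeconfig, kubelet-start, control-plane, and etcd, all against the same /var/tmp/minikube/kubeadm.yaml. A sketch of that sequence (binary path taken from the log, helper name illustrative):

// replayInitPhases mirrors the five `kubeadm init phase ...` invocations above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func replayInitPhases(kubeadmBin, configPath string) error {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", configPath)
		cmd := exec.Command(kubeadmBin, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("phase %v: %w", p, err)
		}
	}
	return nil
}

func main() {
	if err := replayInitPhases("/var/lib/minikube/binaries/v1.30.3/kubeadm", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}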
	I0729 18:27:08.225236   77859 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:27:08.225336   77859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:08.725789   77859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:09.226271   77859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:09.270268   77859 api_server.go:72] duration metric: took 1.045028259s to wait for apiserver process to appear ...
	I0729 18:27:09.270298   77859 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:27:09.270320   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:09.270877   77859 api_server.go:269] stopped: https://192.168.61.244:8444/healthz: Get "https://192.168.61.244:8444/healthz": dial tcp 192.168.61.244:8444: connect: connection refused
	I0729 18:27:06.043838   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:08.044382   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:07.451087   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:07.451659   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:07.451688   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:07.451607   79098 retry.go:31] will retry after 1.734643072s: waiting for machine to come up
	I0729 18:27:09.188407   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:09.188963   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:09.188997   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:09.188900   79098 retry.go:31] will retry after 2.010973572s: waiting for machine to come up
	I0729 18:27:11.201171   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:11.201586   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:11.201620   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:11.201535   79098 retry.go:31] will retry after 3.178533437s: waiting for machine to come up
	I0729 18:27:09.771273   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:12.506136   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:27:12.506166   77859 api_server.go:103] status: https://192.168.61.244:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:27:12.506179   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:12.518847   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:27:12.518881   77859 api_server.go:103] status: https://192.168.61.244:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:27:12.771281   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:12.775798   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:27:12.775832   77859 api_server.go:103] status: https://192.168.61.244:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:27:13.270383   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:13.281935   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:27:13.281975   77859 api_server.go:103] status: https://192.168.61.244:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:27:13.770440   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:13.776004   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 200:
	ok
	I0729 18:27:13.783210   77859 api_server.go:141] control plane version: v1.30.3
	I0729 18:27:13.783237   77859 api_server.go:131] duration metric: took 4.512933596s to wait for apiserver health ...
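The progression above is typical of a restart: connection refused while the apiserver container starts, 403 for the anonymous probe until the RBAC bootstrap roles that permit unauthenticated /healthz access exist, 500 while post-start hooks finish, then 200 "ok". A minimal poller that treats everything other than 200 as retryable (TLS verification is skipped here only to keep the sketch short; it is not a statement about what minikube's real check does):

// waitHealthz polls /healthz until it returns 200 or the timeout elapses.
// Connection errors, 403 and 500 are all treated as "not ready yet".
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.61.244:8444/healthz", 4*time.Minute))
}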
	I0729 18:27:13.783247   77859 cni.go:84] Creating CNI manager for ""
	I0729 18:27:13.783253   77859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:27:13.785148   77859 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:27:13.786485   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:27:13.814986   77859 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
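The "Configuring bridge CNI" step just drops a conflist into /etc/cni/net.d. The exact 496-byte payload minikube writes is not shown in the log; the Go sketch below writes an illustrative bridge+portmap chain of the same general shape, purely as an assumption about what such a file looks like:

// writeBridgeConflist writes a generic bridge CNI config to /etc/cni/net.d.
// The JSON below is illustrative, not minikube's actual 1-k8s.conflist.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}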
	I0729 18:27:13.860557   77859 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:27:13.872823   77859 system_pods.go:59] 8 kube-system pods found
	I0729 18:27:13.872864   77859 system_pods.go:61] "coredns-7db6d8ff4d-mk6mx" [e005b1f9-cc7a-45aa-915e-85a461ebc814] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 18:27:13.872871   77859 system_pods.go:61] "etcd-default-k8s-diff-port-502055" [72b552cc-67b0-46bf-b3dd-b6732ebe8493] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 18:27:13.872879   77859 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-502055" [0dc22dbc-667e-4d6f-9938-b13bf3503f79] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 18:27:13.872885   77859 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-502055" [4df00b98-12cf-4359-9d98-8cce6ee9708a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 18:27:13.872891   77859 system_pods.go:61] "kube-proxy-cgdm8" [57a99bb3-9e63-47dd-a958-5be7f3c0a9c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 18:27:13.872898   77859 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-502055" [247b7cd1-6267-469d-af05-b33b284ae846] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 18:27:13.872903   77859 system_pods.go:61] "metrics-server-569cc877fc-bm8tm" [6891d9ee-82db-4307-adf1-ff60d35506bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:27:13.872912   77859 system_pods.go:61] "storage-provisioner" [c2264d30-60dc-41f9-9b84-3b073031cf1b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 18:27:13.872920   77859 system_pods.go:74] duration metric: took 12.342162ms to wait for pod list to return data ...
	I0729 18:27:13.872929   77859 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:27:13.879353   77859 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:27:13.879384   77859 node_conditions.go:123] node cpu capacity is 2
	I0729 18:27:13.879396   77859 node_conditions.go:105] duration metric: took 6.459994ms to run NodePressure ...
	I0729 18:27:13.879416   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:14.172203   77859 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 18:27:14.178467   77859 kubeadm.go:739] kubelet initialised
	I0729 18:27:14.178490   77859 kubeadm.go:740] duration metric: took 6.259862ms waiting for restarted kubelet to initialise ...
	I0729 18:27:14.178499   77859 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:27:14.184872   77859 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.190847   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.190871   77859 pod_ready.go:81] duration metric: took 5.974917ms for pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.190879   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.190886   77859 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.195570   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.195593   77859 pod_ready.go:81] duration metric: took 4.699847ms for pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.195603   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.195610   77859 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.199460   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.199480   77859 pod_ready.go:81] duration metric: took 3.863218ms for pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.199489   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.199494   77859 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.264725   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.264759   77859 pod_ready.go:81] duration metric: took 65.256372ms for pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.264774   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.264781   77859 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cgdm8" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.664064   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "kube-proxy-cgdm8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.664089   77859 pod_ready.go:81] duration metric: took 399.300184ms for pod "kube-proxy-cgdm8" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.664100   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "kube-proxy-cgdm8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.664109   77859 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:10.044797   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:12.543553   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:15.064029   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.064059   77859 pod_ready.go:81] duration metric: took 399.939139ms for pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:15.064074   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.064082   77859 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:15.464538   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.464564   77859 pod_ready.go:81] duration metric: took 400.472397ms for pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:15.464584   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.464592   77859 pod_ready.go:38] duration metric: took 1.286083847s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
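The extra wait above checks one label selector per control-plane component with a 4m0s budget each; a pod whose node is not yet "Ready" is logged with "(skipping!)" and the loop moves on rather than failing. A client-go sketch of the per-pod readiness check, assuming the k8s.io/client-go dependency is available and ignoring the node cross-check that minikube's pod_ready.go also performs:

// podsReady reports whether every pod matching the label selector in kube-system
// has the PodReady condition set to True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podsReady(ctx context.Context, cs *kubernetes.Clientset, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	for i := range pods.Items {
		if !isPodReady(&pods.Items[i]) {
			return false, nil
		}
	}
	return true, nil
}

func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ok, err := podsReady(context.Background(), cs, "k8s-app=kube-dns")
	fmt.Println(ok, err)
}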
	I0729 18:27:15.464609   77859 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 18:27:15.478197   77859 ops.go:34] apiserver oom_adj: -16
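The oom_adj probe reads /proc/<pid>/oom_adj for the apiserver; a negative value such as -16 means the kernel will prefer to kill ordinary processes before the apiserver. A tiny sketch of the same probe (the log's pgrep uses -xnf to pick the newest exact match; this version simply takes the first pid):

// Print the apiserver's oom_adj, mirroring `cat /proc/$(pgrep kube-apiserver)/oom_adj`.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.Fields(string(out))[0]
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data)))
}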
	I0729 18:27:15.478220   77859 kubeadm.go:597] duration metric: took 9.311779975s to restartPrimaryControlPlane
	I0729 18:27:15.478229   77859 kubeadm.go:394] duration metric: took 9.368934157s to StartCluster
	I0729 18:27:15.478247   77859 settings.go:142] acquiring lock: {Name:mkd2c4591636cc1d19b23a0dab1807db2e7ea395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:15.478311   77859 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:27:15.479920   77859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:15.480159   77859 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.244 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:27:15.480244   77859 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 18:27:15.480322   77859 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-502055"
	I0729 18:27:15.480355   77859 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-502055"
	I0729 18:27:15.480356   77859 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-502055"
	W0729 18:27:15.480368   77859 addons.go:243] addon storage-provisioner should already be in state true
	I0729 18:27:15.480371   77859 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-502055"
	I0729 18:27:15.480396   77859 host.go:66] Checking if "default-k8s-diff-port-502055" exists ...
	I0729 18:27:15.480397   77859 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-502055"
	I0729 18:27:15.480402   77859 config.go:182] Loaded profile config "default-k8s-diff-port-502055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:27:15.480415   77859 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-502055"
	W0729 18:27:15.480426   77859 addons.go:243] addon metrics-server should already be in state true
	I0729 18:27:15.480460   77859 host.go:66] Checking if "default-k8s-diff-port-502055" exists ...
	I0729 18:27:15.480709   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.480723   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.480738   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.480738   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.480914   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.480943   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.482004   77859 out.go:177] * Verifying Kubernetes components...
	I0729 18:27:15.483504   77859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:15.495748   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35469
	I0729 18:27:15.495965   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43251
	I0729 18:27:15.495977   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41147
	I0729 18:27:15.496147   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.496324   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.496433   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.496604   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.496622   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.496760   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.496778   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.496914   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.496930   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.496982   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.497086   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.497219   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.497644   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.497672   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.498076   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:27:15.498408   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.498449   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.501769   77859 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-502055"
	W0729 18:27:15.501790   77859 addons.go:243] addon default-storageclass should already be in state true
	I0729 18:27:15.501814   77859 host.go:66] Checking if "default-k8s-diff-port-502055" exists ...
	I0729 18:27:15.502132   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.502163   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.516862   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42139
	I0729 18:27:15.517070   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33417
	I0729 18:27:15.517336   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.517525   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.517845   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.517877   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.518255   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.518356   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.518418   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.518657   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:27:15.518793   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.519009   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:27:15.520045   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44865
	I0729 18:27:15.520489   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.520613   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:27:15.520785   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:27:15.520962   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.520979   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.521295   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.521697   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.521712   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.522950   77859 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:15.522950   77859 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 18:27:15.524246   77859 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 18:27:15.524268   77859 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 18:27:15.524291   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:27:15.524355   77859 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:27:15.524370   77859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 18:27:15.524388   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:27:15.527946   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.528008   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.528609   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:27:15.528645   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.528678   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:27:15.528691   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.528723   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:27:15.528939   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:27:15.528953   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:27:15.529101   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:27:15.529150   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:27:15.529218   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:27:15.529524   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:27:15.529716   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:27:15.539969   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41273
	I0729 18:27:15.540410   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.540887   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.540913   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.541351   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.541675   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:27:15.543494   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:27:15.543728   77859 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 18:27:15.543744   77859 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 18:27:15.543762   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:27:15.546809   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.547225   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:27:15.547250   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.547405   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:27:15.547595   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:27:15.547736   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:27:15.547859   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:27:15.662741   77859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:27:15.681179   77859 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-502055" to be "Ready" ...
	I0729 18:27:15.754691   77859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:27:15.767498   77859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 18:27:15.767515   77859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 18:27:15.781857   77859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 18:27:15.801619   77859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 18:27:15.801645   77859 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 18:27:15.823663   77859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:27:15.823690   77859 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 18:27:15.847827   77859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:27:16.818178   77859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.063432468s)
	I0729 18:27:16.818180   77859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.036288517s)
	I0729 18:27:16.818268   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.818234   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.818290   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.818307   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.818677   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Closing plugin on server side
	I0729 18:27:16.818680   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Closing plugin on server side
	I0729 18:27:16.818694   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.818710   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.818723   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.818724   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.818735   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.818740   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.818755   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.818766   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.818989   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.819000   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.819004   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.819017   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.819014   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Closing plugin on server side
	I0729 18:27:16.824028   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.824047   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.824268   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.824292   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.877321   77859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.029455089s)
	I0729 18:27:16.877378   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.877393   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.877718   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.877767   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.877790   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.877801   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.878030   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.878047   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.878061   77859 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-502055"
	I0729 18:27:16.879704   77859 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 18:27:14.381238   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:14.381648   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:14.381677   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:14.381609   79098 retry.go:31] will retry after 4.005160817s: waiting for machine to come up
	I0729 18:27:16.880972   77859 addons.go:510] duration metric: took 1.400728317s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
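Addon enablement above is ultimately just kubectl apply inside the VM: each addon's manifests are scp'd to /etc/kubernetes/addons and applied with the cluster's kubeconfig and the pinned kubectl binary. A sketch of that apply step (helper name illustrative):

// applyAddon applies one addon's manifests with the in-VM kubectl and kubeconfig,
// mirroring the `sudo KUBECONFIG=... kubectl apply -f ...` lines in the log.
package main

import (
	"os"
	"os/exec"
)

func applyAddon(kubectlBin string, manifests ...string) error {
	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", kubectlBin, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("sudo", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	_ = applyAddon("/var/lib/minikube/binaries/v1.30.3/kubectl",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml")
}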
	I0729 18:27:17.685480   77859 node_ready.go:53] node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:19.687853   77859 node_ready.go:53] node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.042487   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:17.043250   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:19.045374   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:19.859418   77394 start.go:364] duration metric: took 54.906462088s to acquireMachinesLock for "no-preload-888056"
	I0729 18:27:19.859470   77394 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:27:19.859478   77394 fix.go:54] fixHost starting: 
	I0729 18:27:19.859850   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:19.859896   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:19.876798   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46323
	I0729 18:27:19.877254   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:19.877674   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:27:19.877709   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:19.878087   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:19.878257   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:19.878399   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:27:19.879875   77394 fix.go:112] recreateIfNeeded on no-preload-888056: state=Stopped err=<nil>
	I0729 18:27:19.879909   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	W0729 18:27:19.880054   77394 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:27:19.882098   77394 out.go:177] * Restarting existing kvm2 VM for "no-preload-888056" ...
	I0729 18:27:18.388470   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.388971   78080 main.go:141] libmachine: (old-k8s-version-386663) Found IP for machine: 192.168.50.70
	I0729 18:27:18.388989   78080 main.go:141] libmachine: (old-k8s-version-386663) Reserving static IP address...
	I0729 18:27:18.388999   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has current primary IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.389431   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "old-k8s-version-386663", mac: "52:54:00:78:b6:ac", ip: "192.168.50.70"} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.389459   78080 main.go:141] libmachine: (old-k8s-version-386663) Reserved static IP address: 192.168.50.70
	I0729 18:27:18.389477   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | skip adding static IP to network mk-old-k8s-version-386663 - found existing host DHCP lease matching {name: "old-k8s-version-386663", mac: "52:54:00:78:b6:ac", ip: "192.168.50.70"}
	I0729 18:27:18.389493   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | Getting to WaitForSSH function...
	I0729 18:27:18.389515   78080 main.go:141] libmachine: (old-k8s-version-386663) Waiting for SSH to be available...
	I0729 18:27:18.391523   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.391916   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.391941   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.392062   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | Using SSH client type: external
	I0729 18:27:18.392088   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa (-rw-------)
	I0729 18:27:18.392119   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:27:18.392134   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | About to run SSH command:
	I0729 18:27:18.392150   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | exit 0
	I0729 18:27:18.514735   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | SSH cmd err, output: <nil>: 
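The WaitForSSH step above shells out to the external ssh client with the listed non-interactive options and runs `exit 0` until it succeeds. A minimal Go sketch of that probe, with host, user and key path taken from the log (the attempt count and poll interval are assumptions, not minikube's actual policy):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshProbe runs `exit 0` on the guest through the external ssh binary,
// using the same non-interactive options shown in the log above.
func sshProbe(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run()
}

func main() {
	key := "/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa"
	for attempt := 0; attempt < 30; attempt++ {
		if err := sshProbe("192.168.50.70", key); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second) // polling interval is illustrative
	}
	fmt.Println("gave up waiting for SSH")
}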
	I0729 18:27:18.515114   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetConfigRaw
	I0729 18:27:18.515736   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:18.518194   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.518615   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.518651   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.518879   78080 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/config.json ...
	I0729 18:27:18.519090   78080 machine.go:94] provisionDockerMachine start ...
	I0729 18:27:18.519113   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:18.519322   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.521434   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.521824   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.521846   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.521996   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:18.522181   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.522349   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.522514   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:18.522724   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:18.522960   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:18.522975   78080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:27:18.622960   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:27:18.622989   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetMachineName
	I0729 18:27:18.623249   78080 buildroot.go:166] provisioning hostname "old-k8s-version-386663"
	I0729 18:27:18.623277   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetMachineName
	I0729 18:27:18.623461   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.626009   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.626376   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.626406   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.626649   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:18.626876   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.627141   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.627301   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:18.627474   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:18.627669   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:18.627683   78080 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-386663 && echo "old-k8s-version-386663" | sudo tee /etc/hostname
	I0729 18:27:18.748137   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-386663
	
	I0729 18:27:18.748165   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.751546   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.751882   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.751916   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.752086   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:18.752270   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.752409   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.752550   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:18.752747   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:18.753004   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:18.753031   78080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-386663' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-386663/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-386663' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:27:18.863358   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
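The hostname step boils down to the idempotent /etc/hosts snippet shown above: rewrite the 127.0.1.1 entry only if the hostname is missing. A short sketch of assembling that command string in Go (a simplified stand-in, not minikube's actual helper):

package main

import "fmt"

// hostsCmd returns the idempotent shell snippet from the log above: add or
// rewrite the 127.0.1.1 entry only when the hostname is not yet present.
func hostsCmd(name string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
}

func main() {
	fmt.Println(hostsCmd("old-k8s-version-386663"))
}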
	I0729 18:27:18.863389   78080 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:27:18.863415   78080 buildroot.go:174] setting up certificates
	I0729 18:27:18.863425   78080 provision.go:84] configureAuth start
	I0729 18:27:18.863436   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetMachineName
	I0729 18:27:18.863754   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:18.866285   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.866641   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.866668   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.866797   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.868886   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.869241   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.869270   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.869404   78080 provision.go:143] copyHostCerts
	I0729 18:27:18.869459   78080 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:27:18.869468   78080 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:27:18.869522   78080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:27:18.869614   78080 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:27:18.869624   78080 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:27:18.869652   78080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:27:18.869740   78080 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:27:18.869750   78080 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:27:18.869772   78080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:27:18.869833   78080 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-386663 san=[127.0.0.1 192.168.50.70 localhost minikube old-k8s-version-386663]
	I0729 18:27:19.142743   78080 provision.go:177] copyRemoteCerts
	I0729 18:27:19.142808   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:27:19.142842   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.145484   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.145843   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.145872   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.146092   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.146334   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.146532   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.146692   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.230725   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:27:19.255862   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 18:27:19.290922   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 18:27:19.317519   78080 provision.go:87] duration metric: took 454.081583ms to configureAuth
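configureAuth generates a server certificate whose SANs are the names and IPs listed above, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A compact crypto/x509 sketch of issuing such a cert; it is self-signed here to stay short, whereas minikube signs server.pem against the ca.pem/ca-key.pem referenced in the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key pair for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-386663"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provisioning log above.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-386663"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.70")},
	}

	// Self-signed for brevity; minikube signs this with its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}

	certOut, _ := os.Create("server.pem")
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	certOut.Close()

	keyOut, _ := os.Create("server-key.pem")
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	keyOut.Close()
}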
	I0729 18:27:19.317549   78080 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:27:19.317766   78080 config.go:182] Loaded profile config "old-k8s-version-386663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 18:27:19.317854   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.320636   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.321074   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.321110   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.321346   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.321603   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.321782   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.321959   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.322158   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:19.322336   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:19.322351   78080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:27:19.626713   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:27:19.626737   78080 machine.go:97] duration metric: took 1.107631867s to provisionDockerMachine
	I0729 18:27:19.626749   78080 start.go:293] postStartSetup for "old-k8s-version-386663" (driver="kvm2")
	I0729 18:27:19.626763   78080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:27:19.626834   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.627168   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:27:19.627197   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.629389   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.629751   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.629782   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.629907   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.630102   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.630302   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.630460   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.709702   78080 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:27:19.713879   78080 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:27:19.713913   78080 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:27:19.713994   78080 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:27:19.714093   78080 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:27:19.714215   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:27:19.725226   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:19.751727   78080 start.go:296] duration metric: took 124.964072ms for postStartSetup
	I0729 18:27:19.751767   78080 fix.go:56] duration metric: took 19.951972224s for fixHost
	I0729 18:27:19.751796   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.754481   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.754843   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.754877   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.755107   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.755321   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.755482   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.755663   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.755829   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:19.756012   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:19.756024   78080 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:27:19.859279   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277639.831700968
	
	I0729 18:27:19.859302   78080 fix.go:216] guest clock: 1722277639.831700968
	I0729 18:27:19.859309   78080 fix.go:229] Guest: 2024-07-29 18:27:19.831700968 +0000 UTC Remote: 2024-07-29 18:27:19.751770935 +0000 UTC m=+272.565043390 (delta=79.930033ms)
	I0729 18:27:19.859327   78080 fix.go:200] guest clock delta is within tolerance: 79.930033ms
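The guest clock check above reads `date +%s.%N` over SSH and compares it with the host's clock. A small sketch of parsing that output and testing the delta; the 2s tolerance is an assumption for illustration, the log only reports that the ~80ms delta was within tolerance:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the `date +%s.%N` output shown above into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1722277639.831700968") // value from the log
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	// Tolerance value is assumed for illustration only.
	const tolerance = 2 * time.Second
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
}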
	I0729 18:27:19.859332   78080 start.go:83] releasing machines lock for "old-k8s-version-386663", held for 20.059569122s
	I0729 18:27:19.859353   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.859661   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:19.862741   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.863225   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.863261   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.863449   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.864092   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.864309   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.864392   78080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:27:19.864432   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.864547   78080 ssh_runner.go:195] Run: cat /version.json
	I0729 18:27:19.864572   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.867636   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.867798   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.868019   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.868044   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.868178   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.868330   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.868356   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.868360   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.868500   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.868587   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.868667   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.868754   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.868910   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.869046   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.947441   78080 ssh_runner.go:195] Run: systemctl --version
	I0729 18:27:19.967868   78080 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:27:20.114336   78080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:27:20.121716   78080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:27:20.121793   78080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:27:20.143272   78080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:27:20.143298   78080 start.go:495] detecting cgroup driver to use...
	I0729 18:27:20.143385   78080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:27:20.162433   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:27:20.178310   78080 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:27:20.178397   78080 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:27:20.194091   78080 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:27:20.209796   78080 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:27:20.341466   78080 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:27:20.514215   78080 docker.go:233] disabling docker service ...
	I0729 18:27:20.514338   78080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:27:20.531018   78080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:27:20.551839   78080 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:27:20.680430   78080 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:27:20.834782   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:27:20.852454   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:27:20.874962   78080 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 18:27:20.875017   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:20.886550   78080 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:27:20.886619   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:20.899344   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:20.914254   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:20.927308   78080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:27:20.939807   78080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:27:20.951648   78080 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:27:20.951738   78080 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:27:20.967918   78080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:27:20.979872   78080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:21.125398   78080 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:27:21.290736   78080 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:27:21.290816   78080 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:27:21.296922   78080 start.go:563] Will wait 60s for crictl version
	I0729 18:27:21.296987   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:21.302200   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:27:21.350783   78080 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
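Before this version probe, the runtime was pointed at CRI-O by writing /etc/crictl.yaml and patching /etc/crio/crio.conf.d/02-crio.conf with sed (pause image, cgroupfs cgroup manager), then restarting the service. A sketch that assembles those remote commands, with the values taken from the log (simplified relative to what minikube actually runs):

package main

import "fmt"

// crioSetupCmds returns the shell commands used above to point crictl at
// CRI-O and to align its pause image and cgroup driver with kubeadm.
func crioSetupCmds(pauseImage, cgroupDriver string) []string {
	return []string{
		`sudo sh -c 'printf "runtime-endpoint: unix:///var/run/crio/crio.sock\n" > /etc/crictl.yaml'`,
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, pauseImage),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, cgroupDriver),
		`sudo systemctl daemon-reload && sudo systemctl restart crio`,
	}
}

func main() {
	for _, c := range crioSetupCmds("registry.k8s.io/pause:3.2", "cgroupfs") {
		fmt.Println(c)
	}
}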
	I0729 18:27:21.350919   78080 ssh_runner.go:195] Run: crio --version
	I0729 18:27:21.391539   78080 ssh_runner.go:195] Run: crio --version
	I0729 18:27:21.441225   78080 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 18:27:21.442583   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:21.446238   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:21.446728   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:21.446756   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:21.446988   78080 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 18:27:21.452537   78080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:21.470394   78080 kubeadm.go:883] updating cluster {Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:27:21.470555   78080 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:27:21.470610   78080 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:21.531670   78080 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 18:27:21.531742   78080 ssh_runner.go:195] Run: which lz4
	I0729 18:27:21.536436   78080 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 18:27:21.542100   78080 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:27:21.542139   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 18:27:19.883514   77394 main.go:141] libmachine: (no-preload-888056) Calling .Start
	I0729 18:27:19.883693   77394 main.go:141] libmachine: (no-preload-888056) Ensuring networks are active...
	I0729 18:27:19.884447   77394 main.go:141] libmachine: (no-preload-888056) Ensuring network default is active
	I0729 18:27:19.884847   77394 main.go:141] libmachine: (no-preload-888056) Ensuring network mk-no-preload-888056 is active
	I0729 18:27:19.885240   77394 main.go:141] libmachine: (no-preload-888056) Getting domain xml...
	I0729 18:27:19.886133   77394 main.go:141] libmachine: (no-preload-888056) Creating domain...
	I0729 18:27:21.226599   77394 main.go:141] libmachine: (no-preload-888056) Waiting to get IP...
	I0729 18:27:21.227673   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:21.228215   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:21.228278   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:21.228178   79288 retry.go:31] will retry after 290.676407ms: waiting for machine to come up
	I0729 18:27:21.520818   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:21.521458   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:21.521480   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:21.521360   79288 retry.go:31] will retry after 266.145355ms: waiting for machine to come up
	I0729 18:27:21.789603   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:21.790170   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:21.790200   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:21.790137   79288 retry.go:31] will retry after 464.137123ms: waiting for machine to come up
	I0729 18:27:22.255586   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:22.256159   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:22.256184   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:22.256098   79288 retry.go:31] will retry after 562.330595ms: waiting for machine to come up
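The interleaved retry.go lines here (and for old-k8s-version-386663 earlier) come from a jittered, growing backoff while polling libvirt for a DHCP lease. A rough sketch of such a loop; the delays and the lookup function are placeholders, not minikube's exact policy:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup with a jittered, growing delay until an address
// appears. lookup stands in for the libvirt DHCP-lease query minikube performs.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the base delay gradually
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	_, err := waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 2*time.Second)
	fmt.Println(err)
}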
	I0729 18:27:21.691280   77859 node_ready.go:53] node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:23.188725   77859 node_ready.go:49] node "default-k8s-diff-port-502055" has status "Ready":"True"
	I0729 18:27:23.188758   77859 node_ready.go:38] duration metric: took 7.507549954s for node "default-k8s-diff-port-502055" to be "Ready" ...
	I0729 18:27:23.188772   77859 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:27:23.197714   77859 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:23.204037   77859 pod_ready.go:92] pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:23.204065   77859 pod_ready.go:81] duration metric: took 6.32123ms for pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:23.204086   77859 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:23.211765   77859 pod_ready.go:92] pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:23.211791   77859 pod_ready.go:81] duration metric: took 7.69614ms for pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:23.211803   77859 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:21.544757   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:24.043649   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:23.329902   78080 crio.go:462] duration metric: took 1.793505279s to copy over tarball
	I0729 18:27:23.329979   78080 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:27:26.453768   78080 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.123735537s)
	I0729 18:27:26.453800   78080 crio.go:469] duration metric: took 3.123869338s to extract the tarball
	I0729 18:27:26.453809   78080 ssh_runner.go:146] rm: /preloaded.tar.lz4
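The preload step copies the ~473 MB lz4 tarball to the guest and unpacks it straight into /var so the cached container images land in CRI-O's storage. Roughly the same extraction command, wrapped in Go for illustration:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the extraction command in the log: xattrs are preserved so the
	// unpacked binaries keep their file capabilities.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Println("preloaded images extracted into /var")
}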
	I0729 18:27:26.501748   78080 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:26.538093   78080 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 18:27:26.538124   78080 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 18:27:26.538226   78080 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.538297   78080 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 18:27:26.538387   78080 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.538232   78080 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:26.538441   78080 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.538303   78080 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.538277   78080 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.538783   78080 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.540806   78080 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.540823   78080 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.540847   78080 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.540858   78080 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 18:27:26.540806   78080 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:26.540894   78080 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.540937   78080 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.540987   78080 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.700993   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.704402   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.712647   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.714034   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.715935   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.753888   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.758588   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 18:27:26.837981   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:26.844473   78080 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 18:27:26.844532   78080 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.844578   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.877082   78080 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 18:27:26.877134   78080 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.877183   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.889792   78080 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 18:27:26.889887   78080 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.889842   78080 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 18:27:26.889944   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.889983   78080 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.890034   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.916338   78080 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 18:27:26.916388   78080 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.916440   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.916437   78080 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 18:27:26.916540   78080 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.916581   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.942747   78080 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 18:27:26.942794   78080 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 18:27:26.942839   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:27.056976   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:27.056976   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 18:27:27.057045   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:27.057071   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:27.057101   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:27.057152   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:27.057178   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 18:27:27.219396   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 18:27:22.820490   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:22.820969   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:22.820993   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:22.820906   79288 retry.go:31] will retry after 728.452145ms: waiting for machine to come up
	I0729 18:27:23.550655   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:23.551337   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:23.551361   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:23.551287   79288 retry.go:31] will retry after 782.583051ms: waiting for machine to come up
	I0729 18:27:24.335785   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:24.336257   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:24.336310   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:24.336235   79288 retry.go:31] will retry after 1.040109521s: waiting for machine to come up
	I0729 18:27:25.377676   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:25.378187   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:25.378231   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:25.378153   79288 retry.go:31] will retry after 1.276093038s: waiting for machine to come up
	I0729 18:27:26.655479   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:26.655922   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:26.655950   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:26.655872   79288 retry.go:31] will retry after 1.267687539s: waiting for machine to come up
	I0729 18:27:25.219175   77859 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:27.225735   77859 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:27.718741   77859 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:27.718772   77859 pod_ready.go:81] duration metric: took 4.506959705s for pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.718786   77859 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.723687   77859 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:27.723709   77859 pod_ready.go:81] duration metric: took 4.915901ms for pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.723720   77859 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cgdm8" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.728504   77859 pod_ready.go:92] pod "kube-proxy-cgdm8" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:27.728526   77859 pod_ready.go:81] duration metric: took 4.797185ms for pod "kube-proxy-cgdm8" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.728538   77859 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.733036   77859 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:27.733061   77859 pod_ready.go:81] duration metric: took 4.514471ms for pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.733073   77859 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:29.739966   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
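The pod_ready.go lines above poll each system pod's Ready condition until it flips to True or the wait times out. A client-go sketch of that check; the pod name comes from the log, while the kubeconfig path and poll interval are assumptions, and this is not minikube's own code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod has condition Ready=True, the check
// behind the "Ready":"True"/"False" lines above.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-569cc877fc-bm8tm", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // poll interval is illustrative
	}
}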
	I0729 18:27:26.044607   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:28.543664   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:27.219541   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 18:27:27.223329   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 18:27:27.223406   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 18:27:27.223450   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 18:27:27.223492   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 18:27:27.223536   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 18:27:27.223567   78080 cache_images.go:92] duration metric: took 685.427642ms to LoadCachedImages
	W0729 18:27:27.223653   78080 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0729 18:27:27.223672   78080 kubeadm.go:934] updating node { 192.168.50.70 8443 v1.20.0 crio true true} ...
	I0729 18:27:27.223785   78080 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-386663 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:27:27.223866   78080 ssh_runner.go:195] Run: crio config
	I0729 18:27:27.273186   78080 cni.go:84] Creating CNI manager for ""
	I0729 18:27:27.273207   78080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:27:27.273217   78080 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:27:27.273241   78080 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.70 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-386663 NodeName:old-k8s-version-386663 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 18:27:27.273424   78080 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-386663"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:27:27.273498   78080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 18:27:27.285247   78080 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:27:27.285327   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:27:27.295747   78080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 18:27:27.314192   78080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:27:27.331654   78080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 18:27:27.351717   78080 ssh_runner.go:195] Run: grep 192.168.50.70	control-plane.minikube.internal$ /etc/hosts
	I0729 18:27:27.356205   78080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:27.370446   78080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:27.509250   78080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:27:27.528776   78080 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663 for IP: 192.168.50.70
	I0729 18:27:27.528804   78080 certs.go:194] generating shared ca certs ...
	I0729 18:27:27.528823   78080 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:27.528991   78080 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:27:27.529045   78080 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:27:27.529061   78080 certs.go:256] generating profile certs ...
	I0729 18:27:27.529194   78080 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/client.key
	I0729 18:27:27.529308   78080 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.key.71ea3f9f
	I0729 18:27:27.529364   78080 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.key
	I0729 18:27:27.529529   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:27:27.529569   78080 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:27:27.529584   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:27:27.529614   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:27:27.529645   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:27:27.529689   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:27:27.529751   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:27.530573   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:27:27.582122   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:27:27.626846   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:27:27.663609   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:27:27.700294   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 18:27:27.746614   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:27:27.785212   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:27:27.834479   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:27:27.866939   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:27:27.892613   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:27:27.919059   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:27:27.947557   78080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:27:27.968625   78080 ssh_runner.go:195] Run: openssl version
	I0729 18:27:27.976500   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:27:27.991016   78080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:27:27.996228   78080 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:27:27.996285   78080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:27:28.002529   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:27:28.013844   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:27:28.025388   78080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:28.029982   78080 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:28.030042   78080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:28.036362   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:27:28.050134   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:27:28.062742   78080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:27:28.067240   78080 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:27:28.067293   78080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:27:28.072973   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 18:27:28.084143   78080 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:27:28.089526   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:27:28.096556   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:27:28.103044   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:27:28.109337   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:27:28.115455   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:27:28.121449   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 18:27:28.127395   78080 kubeadm.go:392] StartCluster: {Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:27:28.127504   78080 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:27:28.127581   78080 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:28.176772   78080 cri.go:89] found id: ""
	I0729 18:27:28.176837   78080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:27:28.187955   78080 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:27:28.187979   78080 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:27:28.188034   78080 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:27:28.197926   78080 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:27:28.199364   78080 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-386663" does not appear in /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:27:28.200382   78080 kubeconfig.go:62] /home/jenkins/minikube-integration/19345-11206/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-386663" cluster setting kubeconfig missing "old-k8s-version-386663" context setting]
	I0729 18:27:28.201737   78080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:28.287712   78080 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:27:28.300675   78080 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.70
	I0729 18:27:28.300716   78080 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:27:28.300728   78080 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:27:28.300795   78080 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:28.343880   78080 cri.go:89] found id: ""
	I0729 18:27:28.343962   78080 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:27:28.362391   78080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:27:28.372805   78080 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:27:28.372830   78080 kubeadm.go:157] found existing configuration files:
	
	I0729 18:27:28.372882   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:27:28.383540   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:27:28.383629   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:27:28.396564   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:27:28.409151   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:27:28.409208   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:27:28.422243   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:27:28.434736   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:27:28.434839   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:27:28.447681   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:27:28.460008   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:27:28.460073   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:27:28.472647   78080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:27:28.484179   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:28.634526   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:29.206575   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:29.449626   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:29.550859   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:29.681945   78080 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:27:29.682015   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:30.182098   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:30.682977   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:31.182152   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:31.682468   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:32.183031   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:27.924957   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:27.925430   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:27.925461   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:27.925378   79288 retry.go:31] will retry after 1.455979038s: waiting for machine to come up
	I0729 18:27:29.383257   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:29.383769   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:29.383793   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:29.383722   79288 retry.go:31] will retry after 1.862834258s: waiting for machine to come up
	I0729 18:27:31.248806   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:31.249394   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:31.249414   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:31.249344   79288 retry.go:31] will retry after 3.203097967s: waiting for machine to come up
	I0729 18:27:32.242350   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:34.738663   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:31.043735   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:33.543152   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:32.682567   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:33.182100   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:33.682494   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:34.183075   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:34.683115   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:35.183094   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:35.683092   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:36.182173   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:36.682843   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:37.182324   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:34.453552   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:34.453906   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:34.453930   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:34.453852   79288 retry.go:31] will retry after 3.166208105s: waiting for machine to come up
	I0729 18:27:36.739239   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:38.740812   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:35.543428   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:38.042603   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:37.622330   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.622738   77394 main.go:141] libmachine: (no-preload-888056) Found IP for machine: 192.168.72.80
	I0729 18:27:37.622767   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has current primary IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.622779   77394 main.go:141] libmachine: (no-preload-888056) Reserving static IP address...
	I0729 18:27:37.623108   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "no-preload-888056", mac: "52:54:00:b2:b0:1a", ip: "192.168.72.80"} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.623144   77394 main.go:141] libmachine: (no-preload-888056) DBG | skip adding static IP to network mk-no-preload-888056 - found existing host DHCP lease matching {name: "no-preload-888056", mac: "52:54:00:b2:b0:1a", ip: "192.168.72.80"}
	I0729 18:27:37.623160   77394 main.go:141] libmachine: (no-preload-888056) Reserved static IP address: 192.168.72.80
	I0729 18:27:37.623174   77394 main.go:141] libmachine: (no-preload-888056) Waiting for SSH to be available...
	I0729 18:27:37.623183   77394 main.go:141] libmachine: (no-preload-888056) DBG | Getting to WaitForSSH function...
	I0729 18:27:37.625391   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.625732   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.625759   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.625927   77394 main.go:141] libmachine: (no-preload-888056) DBG | Using SSH client type: external
	I0729 18:27:37.625948   77394 main.go:141] libmachine: (no-preload-888056) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa (-rw-------)
	I0729 18:27:37.625994   77394 main.go:141] libmachine: (no-preload-888056) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:27:37.626008   77394 main.go:141] libmachine: (no-preload-888056) DBG | About to run SSH command:
	I0729 18:27:37.626020   77394 main.go:141] libmachine: (no-preload-888056) DBG | exit 0
	I0729 18:27:37.750587   77394 main.go:141] libmachine: (no-preload-888056) DBG | SSH cmd err, output: <nil>: 
	I0729 18:27:37.750986   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetConfigRaw
	I0729 18:27:37.751717   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetIP
	I0729 18:27:37.754387   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.754753   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.754781   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.754995   77394 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/config.json ...
	I0729 18:27:37.755184   77394 machine.go:94] provisionDockerMachine start ...
	I0729 18:27:37.755207   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:37.755397   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:37.757649   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.757965   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.757988   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.758128   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:37.758297   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.758463   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.758599   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:37.758754   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:37.758918   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:37.758927   77394 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:27:37.862940   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:27:37.862976   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:27:37.863205   77394 buildroot.go:166] provisioning hostname "no-preload-888056"
	I0729 18:27:37.863234   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:27:37.863425   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:37.866190   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.866538   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.866565   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.866705   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:37.866878   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.867046   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.867166   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:37.867307   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:37.867478   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:37.867490   77394 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-888056 && echo "no-preload-888056" | sudo tee /etc/hostname
	I0729 18:27:37.985031   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-888056
	
	I0729 18:27:37.985070   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:37.987577   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.987917   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.987945   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.988126   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:37.988311   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.988469   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.988601   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:37.988786   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:37.988994   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:37.989012   77394 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-888056' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-888056/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-888056' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:27:38.103831   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:27:38.103853   77394 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:27:38.103870   77394 buildroot.go:174] setting up certificates
	I0729 18:27:38.103878   77394 provision.go:84] configureAuth start
	I0729 18:27:38.103886   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:27:38.104166   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetIP
	I0729 18:27:38.107080   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.107493   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.107521   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.107690   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.110087   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.110495   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.110520   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.110738   77394 provision.go:143] copyHostCerts
	I0729 18:27:38.110793   77394 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:27:38.110802   77394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:27:38.110853   77394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:27:38.110968   77394 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:27:38.110978   77394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:27:38.110998   77394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:27:38.111056   77394 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:27:38.111063   77394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:27:38.111080   77394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:27:38.111149   77394 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.no-preload-888056 san=[127.0.0.1 192.168.72.80 localhost minikube no-preload-888056]
	I0729 18:27:38.327305   77394 provision.go:177] copyRemoteCerts
	I0729 18:27:38.327378   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:27:38.327407   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.330008   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.330304   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.330327   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.330516   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:38.330739   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.330908   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:38.331071   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:27:38.414678   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 18:27:38.443418   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:27:38.469248   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 18:27:38.494014   77394 provision.go:87] duration metric: took 390.106553ms to configureAuth
	I0729 18:27:38.494049   77394 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:27:38.494245   77394 config.go:182] Loaded profile config "no-preload-888056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:27:38.494357   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.497162   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.497586   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.497620   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.497946   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:38.498137   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.498328   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.498566   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:38.498766   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:38.498940   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:38.498955   77394 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:27:38.762438   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:27:38.762462   77394 machine.go:97] duration metric: took 1.007266999s to provisionDockerMachine
	I0729 18:27:38.762473   77394 start.go:293] postStartSetup for "no-preload-888056" (driver="kvm2")
	I0729 18:27:38.762484   77394 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:27:38.762511   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:38.762797   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:27:38.762832   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.765677   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.766031   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.766054   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.766222   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:38.766432   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.766621   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:38.766774   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:27:38.854492   77394 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:27:38.858934   77394 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:27:38.858962   77394 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:27:38.859041   77394 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:27:38.859136   77394 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:27:38.859251   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:27:38.869459   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:38.894422   77394 start.go:296] duration metric: took 131.935433ms for postStartSetup
	I0729 18:27:38.894466   77394 fix.go:56] duration metric: took 19.034987866s for fixHost
	I0729 18:27:38.894492   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.897266   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.897654   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.897684   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.897890   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:38.898102   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.898250   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.898356   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:38.898547   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:38.898721   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:38.898732   77394 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 18:27:39.003526   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277658.970659996
	
	I0729 18:27:39.003571   77394 fix.go:216] guest clock: 1722277658.970659996
	I0729 18:27:39.003581   77394 fix.go:229] Guest: 2024-07-29 18:27:38.970659996 +0000 UTC Remote: 2024-07-29 18:27:38.8944731 +0000 UTC m=+356.533366653 (delta=76.186896ms)
	I0729 18:27:39.003600   77394 fix.go:200] guest clock delta is within tolerance: 76.186896ms
	I0729 18:27:39.003605   77394 start.go:83] releasing machines lock for "no-preload-888056", held for 19.144159359s
	I0729 18:27:39.003622   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:39.003881   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetIP
	I0729 18:27:39.006550   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.006850   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:39.006886   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.007005   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:39.007597   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:39.007779   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:39.007879   77394 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:27:39.007939   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:39.008001   77394 ssh_runner.go:195] Run: cat /version.json
	I0729 18:27:39.008026   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:39.010634   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.010941   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:39.010965   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.010984   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.011257   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:39.011442   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:39.011474   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.011487   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:39.011632   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:39.011678   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:39.011782   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:39.011951   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:39.011985   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:27:39.012094   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:27:39.114446   77394 ssh_runner.go:195] Run: systemctl --version
	I0729 18:27:39.120848   77394 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:27:39.266976   77394 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:27:39.273603   77394 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:27:39.273670   77394 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:27:39.295511   77394 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:27:39.295533   77394 start.go:495] detecting cgroup driver to use...
	I0729 18:27:39.295593   77394 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:27:39.313692   77394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:27:39.328435   77394 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:27:39.328502   77394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:27:39.342580   77394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:27:39.356694   77394 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:27:39.474555   77394 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:27:39.632766   77394 docker.go:233] disabling docker service ...
	I0729 18:27:39.632827   77394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:27:39.648961   77394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:27:39.663277   77394 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:27:39.813329   77394 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:27:39.944017   77394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:27:39.957624   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:27:39.976348   77394 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 18:27:39.976401   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:39.986672   77394 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:27:39.986735   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:39.996867   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.007547   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.018141   77394 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:27:40.029258   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.040007   77394 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.057611   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.068107   77394 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:27:40.077798   77394 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:27:40.077877   77394 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:27:40.091040   77394 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
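	The two commands above are the netfilter fallback: the bridge-nf-call-iptables sysctl key is missing because br_netfilter has not been loaded yet, so the runner loads the module and then enables IPv4 forwarding. A minimal shell sketch of that logic (same commands as in the log, just wrapped in a conditional):

    # Load br_netfilter only if the bridge sysctl key is absent, then enable forwarding.
    if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
        sudo modprobe br_netfilter
    fi
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'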
	I0729 18:27:40.100846   77394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:40.227049   77394 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:27:40.368213   77394 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:27:40.368295   77394 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:27:40.374168   77394 start.go:563] Will wait 60s for crictl version
	I0729 18:27:40.374239   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.378268   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:27:40.422500   77394 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:27:40.422579   77394 ssh_runner.go:195] Run: crio --version
	I0729 18:27:40.451170   77394 ssh_runner.go:195] Run: crio --version
	I0729 18:27:40.481789   77394 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
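	Up to this point the runner has switched the guest from docker/cri-docker to CRI-O: it masks the docker units, points crictl at the cri-o socket, patches 02-crio.conf for the pause image and the cgroupfs cgroup manager, restarts cri-o, and verifies the runtime. A condensed sketch of those configuration steps (commands taken from the log above; minikube performs them one at a time over SSH):

    # Point crictl at the cri-o socket.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

    # Pause image and cgroup manager expected by this Kubernetes version.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf

    # Apply and verify.
    sudo systemctl daemon-reload
    sudo systemctl restart crio
    sudo crictl version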
	I0729 18:27:37.682180   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:38.182453   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:38.682639   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:39.182874   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:39.682496   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:40.182727   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:40.683073   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:41.182060   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:41.682421   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:42.182813   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:40.483209   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetIP
	I0729 18:27:40.486303   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:40.486738   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:40.486768   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:40.487032   77394 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 18:27:40.491318   77394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:40.505196   77394 kubeadm.go:883] updating cluster {Name:no-preload-888056 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-888056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.80 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:27:40.505303   77394 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 18:27:40.505333   77394 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:40.541356   77394 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 18:27:40.541380   77394 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 18:27:40.541445   77394 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:40.541452   77394 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:40.541465   77394 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:40.541495   77394 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.541503   77394 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.541527   77394 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.541583   77394 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:40.542060   77394 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 18:27:40.543507   77394 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:40.543519   77394 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.543505   77394 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 18:27:40.543535   77394 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.543504   77394 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.543761   77394 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:40.543799   77394 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:40.543999   77394 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:40.693026   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.709057   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.715664   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.720337   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:40.746126   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 18:27:40.748805   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:40.759200   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:40.768613   77394 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 18:27:40.768659   77394 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.768705   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.812940   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:40.852143   77394 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 18:27:40.852173   77394 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 18:27:40.852191   77394 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.852206   77394 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.852237   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.852249   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.890477   77394 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 18:27:40.890521   77394 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:40.890566   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.991390   77394 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 18:27:40.991435   77394 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:40.991462   77394 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 18:27:40.991486   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.991501   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.991508   77394 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:40.991548   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.991556   77394 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 18:27:40.991579   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.991595   77394 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:40.991609   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.991654   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.991694   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:41.087626   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 18:27:41.087736   77394 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 18:27:41.087742   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 18:27:41.087782   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:41.087819   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 18:27:41.087830   77394 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 18:27:41.087883   77394 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 18:27:41.091774   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 18:27:41.091828   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:41.091858   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:41.091873   77394 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0729 18:27:41.104679   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 18:27:41.104702   77394 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 18:27:41.104733   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 18:27:41.104750   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 18:27:41.155992   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 18:27:41.156114   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 18:27:41.156227   77394 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 18:27:41.169410   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 18:27:41.169535   77394 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 18:27:41.176103   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 18:27:41.176116   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 18:27:41.176214   77394 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 18:27:41.241044   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:43.739887   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:40.543004   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:43.044338   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:42.682911   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:43.182279   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:43.682506   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:44.182109   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:44.682593   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:45.183002   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:45.682275   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:46.182491   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:46.683027   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:47.182311   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:44.874768   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.769989933s)
	I0729 18:27:44.874798   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 18:27:44.874827   77394 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 18:27:44.874861   77394 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (3.71860957s)
	I0729 18:27:44.874894   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 18:27:44.874906   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 18:27:44.874930   77394 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.705380577s)
	I0729 18:27:44.874947   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 18:27:44.874972   77394 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (3.698734733s)
	I0729 18:27:44.875001   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 18:27:46.333065   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.458135446s)
	I0729 18:27:46.333109   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 18:27:46.333137   77394 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 18:27:46.333175   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 18:27:45.739935   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:47.740654   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:45.542272   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:47.543683   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:47.682979   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:48.183024   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:48.682708   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:49.182427   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:49.682335   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:50.182146   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:50.682716   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:51.182231   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:51.683106   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:52.182739   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:48.194389   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.861190748s)
	I0729 18:27:48.194419   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 18:27:48.194443   77394 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 18:27:48.194483   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 18:27:50.159353   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.964849018s)
	I0729 18:27:50.159384   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 18:27:50.159427   77394 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 18:27:50.159494   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 18:27:52.256998   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.097482067s)
	I0729 18:27:52.257038   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 18:27:52.257075   77394 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 18:27:52.257125   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 18:27:50.239878   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:52.740167   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:50.042299   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:52.042567   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:54.043462   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:52.682628   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:53.182081   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:53.682919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:54.183194   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:54.682506   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:55.182992   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:55.682152   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:56.183083   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:56.682897   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:57.182789   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:52.899503   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 18:27:52.899539   77394 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 18:27:52.899594   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 18:27:54.868011   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.968389841s)
	I0729 18:27:54.868043   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 18:27:54.868075   77394 cache_images.go:123] Successfully loaded all cached images
	I0729 18:27:54.868080   77394 cache_images.go:92] duration metric: took 14.326689217s to LoadCachedImages
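	Because this is the no-preload profile, none of the control-plane images are preloaded in the guest; each one is shipped as a tarball under /var/lib/minikube/images and loaded with podman, whose image store is shared with cri-o. The manual equivalent of one of the load steps above (paths taken from the log):

    # Load one cached image tarball and confirm the runtime can see it.
    sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
    sudo crictl images | grep etcd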
	I0729 18:27:54.868088   77394 kubeadm.go:934] updating node { 192.168.72.80 8443 v1.31.0-beta.0 crio true true} ...
	I0729 18:27:54.868226   77394 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-888056 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-888056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:27:54.868305   77394 ssh_runner.go:195] Run: crio config
	I0729 18:27:54.928569   77394 cni.go:84] Creating CNI manager for ""
	I0729 18:27:54.928591   77394 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:27:54.928604   77394 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:27:54.928633   77394 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.80 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-888056 NodeName:no-preload-888056 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:27:54.928800   77394 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-888056"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:27:54.928871   77394 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 18:27:54.939479   77394 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:27:54.939534   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:27:54.948928   77394 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0729 18:27:54.966700   77394 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 18:27:54.984218   77394 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0729 18:27:55.000813   77394 ssh_runner.go:195] Run: grep 192.168.72.80	control-plane.minikube.internal$ /etc/hosts
	I0729 18:27:55.004529   77394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:55.016140   77394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:55.141053   77394 ssh_runner.go:195] Run: sudo systemctl start kubelet
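	At this point the kubelet unit and its 10-kubeadm.conf drop-in have been written and kubelet has been started. A quick manual check, not something this log runs, to confirm systemd picked up the drop-in and the service is active:

    # Show the effective unit, including /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
    systemctl cat kubelet
    # Confirm the service started.
    sudo systemctl is-active kubelet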
	I0729 18:27:55.158874   77394 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056 for IP: 192.168.72.80
	I0729 18:27:55.158897   77394 certs.go:194] generating shared ca certs ...
	I0729 18:27:55.158918   77394 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:55.159074   77394 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:27:55.159136   77394 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:27:55.159150   77394 certs.go:256] generating profile certs ...
	I0729 18:27:55.159245   77394 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/client.key
	I0729 18:27:55.159320   77394 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/apiserver.key.f09a151f
	I0729 18:27:55.159373   77394 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/proxy-client.key
	I0729 18:27:55.159511   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:27:55.159552   77394 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:27:55.159566   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:27:55.159600   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:27:55.159641   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:27:55.159680   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:27:55.159734   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:55.160575   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:27:55.211823   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:27:55.248637   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:27:55.287972   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:27:55.317920   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 18:27:55.346034   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:27:55.377569   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:27:55.402593   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 18:27:55.427969   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:27:55.452060   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:27:55.476635   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:27:55.500831   77394 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:27:55.518744   77394 ssh_runner.go:195] Run: openssl version
	I0729 18:27:55.524865   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:27:55.536601   77394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:27:55.541752   77394 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:27:55.541807   77394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:27:55.548070   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 18:27:55.559866   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:27:55.571833   77394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:27:55.576304   77394 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:27:55.576342   77394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:27:55.582204   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:27:55.594531   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:27:55.605773   77394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:55.610585   77394 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:55.610633   77394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:55.616478   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
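	The hash-named symlinks above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash lookup convention: each trusted certificate is exposed under /etc/ssl/certs as <subject-hash>.0. A sketch of how such a link is derived for the minikube CA (same files as in the log):

    # Compute the subject hash and install the hash-named symlink OpenSSL looks up.
    HASH=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"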
	I0729 18:27:55.628160   77394 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:27:55.632691   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:27:55.638793   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:27:55.644678   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:27:55.651117   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:27:55.657397   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:27:55.663351   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
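	Each of the openssl checks above uses -checkend 86400, which exits 0 only if the certificate will still be valid 24 hours from now; a non-zero exit is what signals that a control-plane certificate needs regeneration. A minimal sketch of that decision (path taken from the log):

    # Exit status decides whether the cert is still good for at least another day.
    if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "certificate valid for at least 24h"
    else
        echo "certificate missing or expiring within 24h"
    fi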
	I0729 18:27:55.670080   77394 kubeadm.go:392] StartCluster: {Name:no-preload-888056 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-888056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.80 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:27:55.670183   77394 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:27:55.670248   77394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:55.712280   77394 cri.go:89] found id: ""
	I0729 18:27:55.712343   77394 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:27:55.722878   77394 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:27:55.722898   77394 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:27:55.722935   77394 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:27:55.732704   77394 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:27:55.733646   77394 kubeconfig.go:125] found "no-preload-888056" server: "https://192.168.72.80:8443"
	I0729 18:27:55.736512   77394 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:27:55.748360   77394 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.80
	I0729 18:27:55.748403   77394 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:27:55.748416   77394 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:27:55.748464   77394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:55.789773   77394 cri.go:89] found id: ""
	I0729 18:27:55.789854   77394 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:27:55.808905   77394 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:27:55.819969   77394 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:27:55.819991   77394 kubeadm.go:157] found existing configuration files:
	
	I0729 18:27:55.820064   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:27:55.829392   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:27:55.829445   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:27:55.838934   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:27:55.848659   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:27:55.848720   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:27:55.859490   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:27:55.870024   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:27:55.870076   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:27:55.881599   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:27:55.891805   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:27:55.891869   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:27:55.901750   77394 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
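	The freshly generated kubeadm.yaml is now in place and drives the phased restart that follows (certs, kubeconfig, kubelet-start, control-plane, etcd). A hypothetical way to sanity-check such a config by hand, which minikube does not run here, is a kubeadm dry run against the same file:

    # Hypothetical manual check (not part of this log): surface config errors
    # without changing node state.
    sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml --dry-run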
	I0729 18:27:55.911525   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:56.021031   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:57.075545   77394 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.054482988s)
	I0729 18:27:57.075571   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:57.302701   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:57.382837   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:55.261397   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:57.738688   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:59.739828   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:56.543870   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:59.043285   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:57.682237   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:58.182211   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:58.682456   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:59.182669   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:59.682863   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:00.182261   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:00.682993   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:01.182832   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:01.682899   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:02.182765   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:57.492480   77394 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:27:57.492580   77394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:57.993240   77394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:58.492965   77394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:58.517442   77394 api_server.go:72] duration metric: took 1.024961129s to wait for apiserver process to appear ...
	I0729 18:27:58.517479   77394 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:27:58.517505   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:27:58.518046   77394 api_server.go:269] stopped: https://192.168.72.80:8443/healthz: Get "https://192.168.72.80:8443/healthz": dial tcp 192.168.72.80:8443: connect: connection refused
	I0729 18:27:59.017614   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:02.088238   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:28:02.088265   77394 api_server.go:103] status: https://192.168.72.80:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:28:02.088277   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:02.147855   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:28:02.147882   77394 api_server.go:103] status: https://192.168.72.80:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:28:02.518439   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:02.525213   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:28:02.525247   77394 api_server.go:103] status: https://192.168.72.80:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:28:03.018275   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:03.024993   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:28:03.025023   77394 api_server.go:103] status: https://192.168.72.80:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:28:03.517564   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:03.523409   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 200:
	ok
	I0729 18:28:03.529656   77394 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 18:28:03.529687   77394 api_server.go:131] duration metric: took 5.01219984s to wait for apiserver health ...
	I0729 18:28:03.529698   77394 cni.go:84] Creating CNI manager for ""
	I0729 18:28:03.529706   77394 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:28:03.531527   77394 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
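The healthz wait above goes through the states a restarting apiserver typically exposes: connection refused while nothing is listening, 403 for the anonymous probe, 500 while the rbac and priority-class poststarthooks finish, then 200 "ok". A minimal polling sketch of that behaviour follows; it is not minikube's api_server.go implementation, just an unauthenticated probe loop written against the same endpoint.

    // Minimal sketch: poll the apiserver /healthz endpoint until it returns 200,
    // tolerating the intermediate states seen in the log (refused, 403, 500).
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The apiserver serves a self-signed cert during bootstrap; an anonymous,
    		// unverified probe is enough to observe the healthz state transitions.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err != nil {
    			// connection refused etc.: apiserver not listening yet, retry
    			time.Sleep(500 * time.Millisecond)
    			continue
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			fmt.Printf("healthz: %s\n", body) // "ok"
    			return nil
    		}
    		// 403/500 responses mean the server is up but not ready; keep polling.
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.72.80:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }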
	I0729 18:28:01.740935   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:03.743806   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:01.043882   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:03.542540   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:02.682331   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:03.182154   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:03.682499   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:04.182355   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:04.682338   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:05.182107   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:05.683125   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:06.182481   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:06.683153   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:07.182992   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:03.532788   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:28:03.544878   77394 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 18:28:03.586100   77394 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:28:03.604975   77394 system_pods.go:59] 8 kube-system pods found
	I0729 18:28:03.605012   77394 system_pods.go:61] "coredns-5cfdc65f69-bg5j4" [7a26ffbb-014c-4cf7-b302-214cf78374bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 18:28:03.605022   77394 system_pods.go:61] "etcd-no-preload-888056" [d76f2eb7-67d9-4ba0-8d2f-acfc78559651] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 18:28:03.605036   77394 system_pods.go:61] "kube-apiserver-no-preload-888056" [1dbea0ee-58be-47ca-b4ab-94065413768d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 18:28:03.605044   77394 system_pods.go:61] "kube-controller-manager-no-preload-888056" [fb8ce9d9-2953-4b91-8734-87bd38a63eb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 18:28:03.605051   77394 system_pods.go:61] "kube-proxy-w5z2f" [2425da76-cf2d-41c9-b8db-1370ab5333c5] Running
	I0729 18:28:03.605059   77394 system_pods.go:61] "kube-scheduler-no-preload-888056" [9958567f-116d-4094-9e7e-6208f7358486] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 18:28:03.605066   77394 system_pods.go:61] "metrics-server-78fcd8795b-jcdcw" [c506a5f8-d569-4c3d-9b6e-21b9fc63a86a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:28:03.605073   77394 system_pods.go:61] "storage-provisioner" [ccbc4fa6-1237-46ca-ac80-34972b9a43df] Running
	I0729 18:28:03.605082   77394 system_pods.go:74] duration metric: took 18.959807ms to wait for pod list to return data ...
	I0729 18:28:03.605095   77394 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:28:03.609225   77394 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:28:03.609249   77394 node_conditions.go:123] node cpu capacity is 2
	I0729 18:28:03.609261   77394 node_conditions.go:105] duration metric: took 4.16099ms to run NodePressure ...
	I0729 18:28:03.609278   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:28:03.881440   77394 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 18:28:03.886401   77394 kubeadm.go:739] kubelet initialised
	I0729 18:28:03.886429   77394 kubeadm.go:740] duration metric: took 4.958282ms waiting for restarted kubelet to initialise ...
	I0729 18:28:03.886440   77394 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:28:03.891373   77394 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:05.900595   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace has status "Ready":"False"
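The pod_ready.go lines above ("waiting up to 4m0s for pod ... to be Ready", status "Ready":"False"/"True") amount to polling the pod's Ready condition. The rough client-go sketch below shows that pattern; it is an approximation under the assumption of a standard kubeconfig path, not minikube's own helper code.

    // Rough sketch of a Ready-condition wait using client-go.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func waitForPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			return nil
    		}
    		time.Sleep(2 * time.Second) // pod missing or Ready=False: keep waiting
    	}
    	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }

    func main() {
    	// Example kubeconfig path; any kubeconfig pointing at the cluster works.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(waitForPodReady(cs, "kube-system", "coredns-5cfdc65f69-bg5j4", 4*time.Minute))
    }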
	I0729 18:28:06.239029   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:08.240309   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:06.042541   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:08.043322   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:07.682582   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:08.182094   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:08.682613   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:09.182936   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:09.682444   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:10.182354   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:10.682183   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:11.182502   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:11.682466   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:12.182113   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:08.397084   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:10.399546   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:10.897981   77394 pod_ready.go:92] pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:10.898006   77394 pod_ready.go:81] duration metric: took 7.006606905s for pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:10.898014   77394 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:10.903064   77394 pod_ready.go:92] pod "etcd-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:10.903088   77394 pod_ready.go:81] duration metric: took 5.066249ms for pod "etcd-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:10.903099   77394 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:11.409319   77394 pod_ready.go:92] pod "kube-apiserver-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:11.409344   77394 pod_ready.go:81] duration metric: took 506.238678ms for pod "kube-apiserver-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:11.409353   77394 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:10.250001   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:12.741099   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:10.542146   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:13.042422   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:12.682526   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:13.183014   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:13.682449   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:14.182138   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:14.683065   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:15.182838   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:15.682680   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:16.182714   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:16.682116   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:17.182842   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:13.415469   77394 pod_ready.go:102] pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:13.917111   77394 pod_ready.go:92] pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:13.917134   77394 pod_ready.go:81] duration metric: took 2.507774546s for pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.917149   77394 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-w5z2f" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.922045   77394 pod_ready.go:92] pod "kube-proxy-w5z2f" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:13.922069   77394 pod_ready.go:81] duration metric: took 4.912892ms for pod "kube-proxy-w5z2f" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.922080   77394 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.927633   77394 pod_ready.go:92] pod "kube-scheduler-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:13.927654   77394 pod_ready.go:81] duration metric: took 5.565409ms for pod "kube-scheduler-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.927666   77394 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:15.934081   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:15.240105   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:17.740031   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:19.740077   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:15.042540   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:17.043335   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:19.542061   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:17.683114   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:18.182919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:18.683103   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:19.182074   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:19.683031   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:20.182701   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:20.682749   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:21.182949   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:21.683001   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:22.182167   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:17.935797   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:20.434416   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:21.740735   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:24.238828   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:21.544060   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:24.042058   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:22.682723   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:23.182510   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:23.683084   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:24.182220   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:24.682699   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:25.182288   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:25.682433   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:26.182919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:26.682851   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:27.182225   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:22.435465   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:24.935088   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:26.239694   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:28.240174   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:26.542381   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:29.043706   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:27.682408   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:28.182187   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:28.683034   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:29.182922   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:29.682990   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:29.683063   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:29.730368   78080 cri.go:89] found id: ""
	I0729 18:28:29.730405   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.730413   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:29.730419   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:29.730473   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:29.770368   78080 cri.go:89] found id: ""
	I0729 18:28:29.770398   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.770409   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:29.770426   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:29.770479   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:29.809873   78080 cri.go:89] found id: ""
	I0729 18:28:29.809898   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.809906   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:29.809911   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:29.809970   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:29.848980   78080 cri.go:89] found id: ""
	I0729 18:28:29.849006   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.849016   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:29.849023   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:29.849082   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:29.887261   78080 cri.go:89] found id: ""
	I0729 18:28:29.887292   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.887302   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:29.887311   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:29.887388   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:29.927011   78080 cri.go:89] found id: ""
	I0729 18:28:29.927041   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.927051   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:29.927058   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:29.927122   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:29.965577   78080 cri.go:89] found id: ""
	I0729 18:28:29.965609   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.965619   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:29.965625   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:29.965693   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:29.999180   78080 cri.go:89] found id: ""
	I0729 18:28:29.999210   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.999222   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:29.999233   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:29.999253   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:30.049401   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:30.049433   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:30.063903   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:30.063939   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:30.194776   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:30.194797   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:30.194812   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:30.261861   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:30.261906   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
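Each log-gathering cycle above asks crictl for container IDs per component ("crictl ps -a --quiet --name=..."), treats empty output as "No container was found", and then collects kubelet, dmesg, and CRI-O journals. The short sketch below only illustrates the crictl part of that loop by shelling out the same way; it is a hedged illustration, not minikube's cri.go.

    // Hedged illustration: list CRI container IDs per component and treat empty
    // output as "no container found", as the log-gathering cycle above does.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func listContainerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := listContainerIDs(component)
    		if err != nil {
    			fmt.Println("crictl failed:", err)
    			continue
    		}
    		if len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", component)
    			continue
    		}
    		fmt.Printf("%s: %d container(s): %v\n", component, len(ids), ids)
    	}
    }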
	I0729 18:28:27.434837   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:29.435257   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:31.435297   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:30.738940   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:32.740748   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:31.542494   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:33.542872   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:32.801821   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:32.814741   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:32.814815   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:32.853490   78080 cri.go:89] found id: ""
	I0729 18:28:32.853514   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.853522   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:32.853530   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:32.853580   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:32.890314   78080 cri.go:89] found id: ""
	I0729 18:28:32.890339   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.890349   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:32.890356   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:32.890435   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:32.928231   78080 cri.go:89] found id: ""
	I0729 18:28:32.928255   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.928262   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:32.928268   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:32.928314   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:32.964024   78080 cri.go:89] found id: ""
	I0729 18:28:32.964054   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.964065   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:32.964072   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:32.964136   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:33.002099   78080 cri.go:89] found id: ""
	I0729 18:28:33.002127   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.002140   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:33.002146   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:33.002195   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:33.042238   78080 cri.go:89] found id: ""
	I0729 18:28:33.042265   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.042273   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:33.042278   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:33.042331   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:33.078715   78080 cri.go:89] found id: ""
	I0729 18:28:33.078741   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.078750   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:33.078756   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:33.078816   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:33.123304   78080 cri.go:89] found id: ""
	I0729 18:28:33.123334   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.123342   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:33.123351   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:33.123366   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:33.198950   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:33.198994   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:33.223566   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:33.223594   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:33.306500   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:33.306526   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:33.306541   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:33.379386   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:33.379421   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:35.926834   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:35.942218   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:35.942296   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:35.980115   78080 cri.go:89] found id: ""
	I0729 18:28:35.980142   78080 logs.go:276] 0 containers: []
	W0729 18:28:35.980153   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:35.980159   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:35.980221   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:36.015354   78080 cri.go:89] found id: ""
	I0729 18:28:36.015379   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.015387   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:36.015392   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:36.015456   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:36.056411   78080 cri.go:89] found id: ""
	I0729 18:28:36.056435   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.056445   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:36.056451   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:36.056499   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:36.099153   78080 cri.go:89] found id: ""
	I0729 18:28:36.099180   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.099188   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:36.099193   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:36.099241   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:36.133427   78080 cri.go:89] found id: ""
	I0729 18:28:36.133459   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.133470   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:36.133477   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:36.133544   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:36.168619   78080 cri.go:89] found id: ""
	I0729 18:28:36.168646   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.168657   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:36.168664   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:36.168723   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:36.203636   78080 cri.go:89] found id: ""
	I0729 18:28:36.203666   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.203676   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:36.203684   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:36.203747   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:36.246495   78080 cri.go:89] found id: ""
	I0729 18:28:36.246523   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.246533   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:36.246544   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:36.246561   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:36.260630   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:36.260656   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:36.337406   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:36.337424   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:36.337435   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:36.410016   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:36.410049   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:36.453458   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:36.453492   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:33.435859   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:35.934955   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:35.240070   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:37.739406   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:39.740035   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:35.543153   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:37.543467   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:39.543573   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:39.004147   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:39.018217   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:39.018279   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:39.054130   78080 cri.go:89] found id: ""
	I0729 18:28:39.054155   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.054166   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:39.054172   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:39.054219   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:39.090458   78080 cri.go:89] found id: ""
	I0729 18:28:39.090482   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.090490   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:39.090501   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:39.090548   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:39.126933   78080 cri.go:89] found id: ""
	I0729 18:28:39.126960   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.126971   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:39.126978   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:39.127042   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:39.162324   78080 cri.go:89] found id: ""
	I0729 18:28:39.162352   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.162381   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:39.162389   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:39.162450   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:39.202440   78080 cri.go:89] found id: ""
	I0729 18:28:39.202464   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.202471   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:39.202477   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:39.202537   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:39.238314   78080 cri.go:89] found id: ""
	I0729 18:28:39.238342   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.238352   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:39.238368   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:39.238436   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:39.275545   78080 cri.go:89] found id: ""
	I0729 18:28:39.275584   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.275592   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:39.275598   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:39.275663   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:39.311575   78080 cri.go:89] found id: ""
	I0729 18:28:39.311603   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.311614   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:39.311624   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:39.311643   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:39.367667   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:39.367711   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:39.381823   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:39.381852   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:39.456060   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:39.456083   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:39.456100   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:39.531747   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:39.531784   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:42.077771   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:42.092424   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:42.092512   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:42.128710   78080 cri.go:89] found id: ""
	I0729 18:28:42.128744   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.128756   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:42.128765   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:42.128834   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:42.166092   78080 cri.go:89] found id: ""
	I0729 18:28:42.166126   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.166133   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:42.166138   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:42.166186   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:42.200955   78080 cri.go:89] found id: ""
	I0729 18:28:42.200981   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.200989   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:42.200994   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:42.201053   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:38.435476   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:40.935166   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:42.240354   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:44.739322   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:41.543640   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:43.543781   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:42.240176   78080 cri.go:89] found id: ""
	I0729 18:28:42.240203   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.240212   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:42.240219   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:42.240279   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:42.279844   78080 cri.go:89] found id: ""
	I0729 18:28:42.279872   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.279880   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:42.279885   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:42.279946   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:42.313071   78080 cri.go:89] found id: ""
	I0729 18:28:42.313099   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.313108   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:42.313114   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:42.313187   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:42.348540   78080 cri.go:89] found id: ""
	I0729 18:28:42.348566   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.348573   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:42.348580   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:42.348630   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:42.384688   78080 cri.go:89] found id: ""
	I0729 18:28:42.384714   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.384725   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:42.384736   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:42.384750   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:42.399178   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:42.399206   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:42.472903   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:42.472921   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:42.472937   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:42.558541   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:42.558573   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:42.599403   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:42.599432   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:45.154026   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:45.167130   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:45.167200   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:45.203627   78080 cri.go:89] found id: ""
	I0729 18:28:45.203654   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.203663   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:45.203668   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:45.203714   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:45.242293   78080 cri.go:89] found id: ""
	I0729 18:28:45.242316   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.242325   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:45.242332   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:45.242403   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:45.282253   78080 cri.go:89] found id: ""
	I0729 18:28:45.282275   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.282282   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:45.282288   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:45.282335   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:45.320151   78080 cri.go:89] found id: ""
	I0729 18:28:45.320175   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.320183   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:45.320189   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:45.320250   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:45.356210   78080 cri.go:89] found id: ""
	I0729 18:28:45.356236   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.356247   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:45.356254   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:45.356316   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:45.393083   78080 cri.go:89] found id: ""
	I0729 18:28:45.393116   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.393131   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:45.393139   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:45.393199   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:45.430235   78080 cri.go:89] found id: ""
	I0729 18:28:45.430263   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.430274   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:45.430282   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:45.430346   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:45.463068   78080 cri.go:89] found id: ""
	I0729 18:28:45.463132   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.463143   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:45.463155   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:45.463203   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:45.541411   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:45.541441   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:45.581967   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:45.582001   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:45.639427   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:45.639459   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:45.655715   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:45.655741   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:45.725820   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:42.943815   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:45.435444   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:46.739873   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:49.240293   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:46.042576   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:48.042735   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:48.226252   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:48.240419   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:48.240494   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:48.271506   78080 cri.go:89] found id: ""
	I0729 18:28:48.271538   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.271550   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:48.271557   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:48.271615   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:48.305163   78080 cri.go:89] found id: ""
	I0729 18:28:48.305186   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.305198   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:48.305203   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:48.305252   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:48.336453   78080 cri.go:89] found id: ""
	I0729 18:28:48.336480   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.336492   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:48.336500   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:48.336557   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:48.368690   78080 cri.go:89] found id: ""
	I0729 18:28:48.368713   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.368720   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:48.368725   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:48.368784   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:48.401723   78080 cri.go:89] found id: ""
	I0729 18:28:48.401746   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.401753   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:48.401758   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:48.401822   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:48.439876   78080 cri.go:89] found id: ""
	I0729 18:28:48.439896   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.439903   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:48.439908   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:48.439956   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:48.473352   78080 cri.go:89] found id: ""
	I0729 18:28:48.473383   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.473394   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:48.473401   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:48.473461   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:48.506752   78080 cri.go:89] found id: ""
	I0729 18:28:48.506779   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.506788   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:48.506799   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:48.506815   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:48.547513   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:48.547535   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:48.599704   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:48.599733   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:48.613577   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:48.613604   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:48.681272   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:48.681290   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:48.681301   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:51.267397   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:51.280243   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:51.280317   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:51.314047   78080 cri.go:89] found id: ""
	I0729 18:28:51.314078   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.314090   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:51.314097   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:51.314162   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:51.346048   78080 cri.go:89] found id: ""
	I0729 18:28:51.346073   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.346080   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:51.346085   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:51.346144   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:51.380511   78080 cri.go:89] found id: ""
	I0729 18:28:51.380543   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.380553   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:51.380561   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:51.380637   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:51.415189   78080 cri.go:89] found id: ""
	I0729 18:28:51.415213   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.415220   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:51.415227   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:51.415310   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:51.454324   78080 cri.go:89] found id: ""
	I0729 18:28:51.454351   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.454380   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:51.454388   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:51.454449   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:51.488737   78080 cri.go:89] found id: ""
	I0729 18:28:51.488768   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.488779   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:51.488787   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:51.488848   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:51.528869   78080 cri.go:89] found id: ""
	I0729 18:28:51.528903   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.528912   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:51.528920   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:51.528972   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:51.566039   78080 cri.go:89] found id: ""
	I0729 18:28:51.566067   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.566075   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:51.566086   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:51.566102   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:51.604746   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:51.604774   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:51.661048   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:51.661089   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:51.675420   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:51.675447   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:51.754496   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:51.754531   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:51.754548   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:47.934575   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:49.935187   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:51.247773   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:53.740386   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:50.043378   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:52.543104   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:54.335796   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:54.350726   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:54.350784   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:54.389661   78080 cri.go:89] found id: ""
	I0729 18:28:54.389683   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.389694   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:54.389701   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:54.389761   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:54.427073   78080 cri.go:89] found id: ""
	I0729 18:28:54.427100   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.427110   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:54.427117   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:54.427178   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:54.466761   78080 cri.go:89] found id: ""
	I0729 18:28:54.466793   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.466802   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:54.466808   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:54.466871   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:54.501115   78080 cri.go:89] found id: ""
	I0729 18:28:54.501144   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.501159   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:54.501167   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:54.501229   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:54.535430   78080 cri.go:89] found id: ""
	I0729 18:28:54.535461   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.535472   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:54.535480   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:54.535543   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:54.574994   78080 cri.go:89] found id: ""
	I0729 18:28:54.575024   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.575034   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:54.575041   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:54.575107   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:54.608770   78080 cri.go:89] found id: ""
	I0729 18:28:54.608792   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.608800   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:54.608805   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:54.608850   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:54.648026   78080 cri.go:89] found id: ""
	I0729 18:28:54.648050   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.648057   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:54.648066   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:54.648077   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:54.728445   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:54.728485   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:54.774752   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:54.774781   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:54.826549   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:54.826582   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:54.840366   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:54.840394   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:54.907422   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:52.434956   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:54.436125   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:56.933929   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:56.239045   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:58.239967   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:55.041898   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:57.042968   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:59.542837   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:57.408469   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:57.421855   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:57.421923   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:57.457794   78080 cri.go:89] found id: ""
	I0729 18:28:57.457816   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.457824   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:57.457829   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:57.457908   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:57.492851   78080 cri.go:89] found id: ""
	I0729 18:28:57.492880   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.492888   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:57.492894   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:57.492946   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:57.528221   78080 cri.go:89] found id: ""
	I0729 18:28:57.528249   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.528258   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:57.528265   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:57.528330   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:57.565504   78080 cri.go:89] found id: ""
	I0729 18:28:57.565536   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.565547   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:57.565554   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:57.565618   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:57.599391   78080 cri.go:89] found id: ""
	I0729 18:28:57.599418   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.599426   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:57.599432   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:57.599491   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:57.643757   78080 cri.go:89] found id: ""
	I0729 18:28:57.643784   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.643798   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:57.643806   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:57.643867   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:57.680825   78080 cri.go:89] found id: ""
	I0729 18:28:57.680853   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.680864   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:57.680871   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:57.680936   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:57.714450   78080 cri.go:89] found id: ""
	I0729 18:28:57.714479   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.714490   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:57.714500   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:57.714516   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:57.798411   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:57.798437   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:57.798453   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:57.878210   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:57.878246   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:57.917476   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:57.917505   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:57.971395   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:57.971432   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:00.486419   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:00.500625   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:00.500703   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:00.539625   78080 cri.go:89] found id: ""
	I0729 18:29:00.539650   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.539659   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:00.539682   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:00.539737   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:00.577252   78080 cri.go:89] found id: ""
	I0729 18:29:00.577284   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.577297   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:00.577303   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:00.577350   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:00.611850   78080 cri.go:89] found id: ""
	I0729 18:29:00.611878   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.611886   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:00.611892   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:00.611939   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:00.648964   78080 cri.go:89] found id: ""
	I0729 18:29:00.648989   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.648996   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:00.649003   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:00.649062   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:00.686124   78080 cri.go:89] found id: ""
	I0729 18:29:00.686147   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.686156   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:00.686161   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:00.686217   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:00.721166   78080 cri.go:89] found id: ""
	I0729 18:29:00.721195   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.721205   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:00.721213   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:00.721276   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:00.758394   78080 cri.go:89] found id: ""
	I0729 18:29:00.758423   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.758431   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:00.758436   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:00.758491   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:00.793487   78080 cri.go:89] found id: ""
	I0729 18:29:00.793514   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.793523   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:00.793533   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:00.793549   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:00.807069   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:00.807106   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:00.880611   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:00.880629   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:00.880641   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:00.963534   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:00.963568   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:01.004145   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:01.004174   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:58.933964   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:00.934221   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:00.739676   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:02.741020   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:02.042346   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:04.541902   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:03.560985   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:03.574407   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:03.574476   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:03.608027   78080 cri.go:89] found id: ""
	I0729 18:29:03.608048   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.608057   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:03.608062   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:03.608119   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:03.644777   78080 cri.go:89] found id: ""
	I0729 18:29:03.644804   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.644814   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:03.644821   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:03.644895   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:03.684050   78080 cri.go:89] found id: ""
	I0729 18:29:03.684074   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.684082   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:03.684089   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:03.684149   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:03.724350   78080 cri.go:89] found id: ""
	I0729 18:29:03.724376   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.724383   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:03.724390   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:03.724439   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:03.766859   78080 cri.go:89] found id: ""
	I0729 18:29:03.766887   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.766898   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:03.766905   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:03.766967   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:03.800535   78080 cri.go:89] found id: ""
	I0729 18:29:03.800562   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.800572   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:03.800579   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:03.800639   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:03.834991   78080 cri.go:89] found id: ""
	I0729 18:29:03.835011   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.835019   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:03.835024   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:03.835073   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:03.869159   78080 cri.go:89] found id: ""
	I0729 18:29:03.869191   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.869201   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:03.869211   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:03.869226   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:03.940451   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:03.940469   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:03.940487   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:04.020880   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:04.020910   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:04.064707   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:04.064728   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:04.121551   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:04.121587   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:06.636983   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:06.651500   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:06.651582   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:06.686556   78080 cri.go:89] found id: ""
	I0729 18:29:06.686582   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.686592   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:06.686599   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:06.686660   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:06.721967   78080 cri.go:89] found id: ""
	I0729 18:29:06.721996   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.722008   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:06.722016   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:06.722115   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:06.760409   78080 cri.go:89] found id: ""
	I0729 18:29:06.760433   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.760440   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:06.760445   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:06.760499   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:06.794050   78080 cri.go:89] found id: ""
	I0729 18:29:06.794074   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.794081   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:06.794087   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:06.794143   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:06.826445   78080 cri.go:89] found id: ""
	I0729 18:29:06.826471   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.826478   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:06.826484   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:06.826544   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:06.860680   78080 cri.go:89] found id: ""
	I0729 18:29:06.860700   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.860706   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:06.860712   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:06.860761   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:06.898192   78080 cri.go:89] found id: ""
	I0729 18:29:06.898215   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.898223   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:06.898229   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:06.898284   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:06.931892   78080 cri.go:89] found id: ""
	I0729 18:29:06.931920   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.931930   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:06.931940   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:06.931955   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:06.987265   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:06.987294   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:07.043520   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:07.043547   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:07.056995   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:07.057019   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:07.124932   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:07.124956   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:07.124971   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:03.435778   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:05.936004   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:05.239352   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:07.239383   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:06.542526   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:08.543497   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:09.708947   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:09.723497   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:09.723565   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:09.762686   78080 cri.go:89] found id: ""
	I0729 18:29:09.762714   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.762725   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:09.762733   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:09.762797   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:09.799674   78080 cri.go:89] found id: ""
	I0729 18:29:09.799699   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.799708   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:09.799715   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:09.799775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:09.836121   78080 cri.go:89] found id: ""
	I0729 18:29:09.836147   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.836156   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:09.836161   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:09.836209   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:09.872758   78080 cri.go:89] found id: ""
	I0729 18:29:09.872783   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.872791   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:09.872797   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:09.872842   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:09.911681   78080 cri.go:89] found id: ""
	I0729 18:29:09.911711   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.911719   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:09.911724   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:09.911773   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:09.951531   78080 cri.go:89] found id: ""
	I0729 18:29:09.951554   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.951561   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:09.951567   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:09.951624   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:09.985568   78080 cri.go:89] found id: ""
	I0729 18:29:09.985597   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.985606   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:09.985612   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:09.985661   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:10.020369   78080 cri.go:89] found id: ""
	I0729 18:29:10.020394   78080 logs.go:276] 0 containers: []
	W0729 18:29:10.020402   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:10.020409   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:10.020421   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:10.076538   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:10.076574   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:10.090954   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:10.090980   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:10.165843   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:10.165875   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:10.165890   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:10.242438   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:10.242469   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:08.434575   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:10.934523   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:09.744446   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:12.239540   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:14.242060   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:10.544272   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:13.043064   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:12.781369   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:12.797066   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:12.797160   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:12.832500   78080 cri.go:89] found id: ""
	I0729 18:29:12.832528   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.832545   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:12.832552   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:12.832615   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:12.866390   78080 cri.go:89] found id: ""
	I0729 18:29:12.866420   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.866428   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:12.866434   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:12.866494   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:12.901616   78080 cri.go:89] found id: ""
	I0729 18:29:12.901636   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.901644   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:12.901649   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:12.901713   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:12.935954   78080 cri.go:89] found id: ""
	I0729 18:29:12.935976   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.935985   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:12.935993   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:12.936053   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:12.970570   78080 cri.go:89] found id: ""
	I0729 18:29:12.970623   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.970637   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:12.970645   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:12.970702   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:13.008629   78080 cri.go:89] found id: ""
	I0729 18:29:13.008658   78080 logs.go:276] 0 containers: []
	W0729 18:29:13.008666   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:13.008672   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:13.008725   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:13.045689   78080 cri.go:89] found id: ""
	I0729 18:29:13.045713   78080 logs.go:276] 0 containers: []
	W0729 18:29:13.045721   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:13.045726   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:13.045773   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:13.084707   78080 cri.go:89] found id: ""
	I0729 18:29:13.084735   78080 logs.go:276] 0 containers: []
	W0729 18:29:13.084745   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:13.084756   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:13.084774   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:13.161884   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:13.161920   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:13.205377   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:13.205410   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:13.258161   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:13.258189   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:13.272208   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:13.272240   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:13.347519   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:15.848068   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:15.861773   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:15.861851   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:15.902421   78080 cri.go:89] found id: ""
	I0729 18:29:15.902449   78080 logs.go:276] 0 containers: []
	W0729 18:29:15.902458   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:15.902466   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:15.902532   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:15.939552   78080 cri.go:89] found id: ""
	I0729 18:29:15.939576   78080 logs.go:276] 0 containers: []
	W0729 18:29:15.939583   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:15.939588   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:15.939645   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:15.974424   78080 cri.go:89] found id: ""
	I0729 18:29:15.974454   78080 logs.go:276] 0 containers: []
	W0729 18:29:15.974463   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:15.974468   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:15.974516   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:16.010955   78080 cri.go:89] found id: ""
	I0729 18:29:16.010993   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.011000   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:16.011006   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:16.011062   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:16.046785   78080 cri.go:89] found id: ""
	I0729 18:29:16.046815   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.046825   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:16.046832   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:16.046887   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:16.082691   78080 cri.go:89] found id: ""
	I0729 18:29:16.082721   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.082731   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:16.082739   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:16.082796   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:16.127633   78080 cri.go:89] found id: ""
	I0729 18:29:16.127663   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.127676   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:16.127684   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:16.127741   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:16.162641   78080 cri.go:89] found id: ""
	I0729 18:29:16.162662   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.162670   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:16.162684   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:16.162695   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:16.215132   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:16.215162   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:16.229581   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:16.229607   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:16.303178   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:16.303198   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:16.303212   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:16.383739   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:16.383775   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:12.934751   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:14.934965   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:16.739047   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:18.739145   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:15.043163   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:17.544340   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:18.924292   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:18.937571   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:18.937626   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:18.970523   78080 cri.go:89] found id: ""
	I0729 18:29:18.970554   78080 logs.go:276] 0 containers: []
	W0729 18:29:18.970563   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:18.970568   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:18.970624   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:19.005448   78080 cri.go:89] found id: ""
	I0729 18:29:19.005471   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.005478   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:19.005483   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:19.005538   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:19.044352   78080 cri.go:89] found id: ""
	I0729 18:29:19.044377   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.044386   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:19.044393   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:19.044448   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:19.079288   78080 cri.go:89] found id: ""
	I0729 18:29:19.079317   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.079327   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:19.079333   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:19.079402   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:19.122932   78080 cri.go:89] found id: ""
	I0729 18:29:19.122954   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.122961   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:19.122967   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:19.123020   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:19.166992   78080 cri.go:89] found id: ""
	I0729 18:29:19.167018   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.167025   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:19.167031   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:19.167103   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:19.215301   78080 cri.go:89] found id: ""
	I0729 18:29:19.215331   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.215341   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:19.215355   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:19.215419   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:19.267635   78080 cri.go:89] found id: ""
	I0729 18:29:19.267657   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.267664   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:19.267671   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:19.267682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:19.319924   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:19.319962   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:19.333987   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:19.334010   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:19.406541   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:19.406558   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:19.406571   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:19.487388   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:19.487426   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:22.027745   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:22.041145   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:22.041218   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:22.080000   78080 cri.go:89] found id: ""
	I0729 18:29:22.080022   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.080029   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:22.080034   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:22.080079   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:22.116385   78080 cri.go:89] found id: ""
	I0729 18:29:22.116415   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.116425   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:22.116431   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:22.116492   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:22.150530   78080 cri.go:89] found id: ""
	I0729 18:29:22.150552   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.150559   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:22.150565   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:22.150621   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:22.188782   78080 cri.go:89] found id: ""
	I0729 18:29:22.188808   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.188817   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:22.188822   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:22.188873   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:17.434007   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:19.434864   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:21.935573   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:20.739852   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:23.239853   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:20.044010   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:22.542952   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:24.543614   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:22.227117   78080 cri.go:89] found id: ""
	I0729 18:29:22.227152   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.227162   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:22.227169   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:22.227234   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:22.263057   78080 cri.go:89] found id: ""
	I0729 18:29:22.263079   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.263086   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:22.263091   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:22.263145   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:22.297368   78080 cri.go:89] found id: ""
	I0729 18:29:22.297391   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.297399   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:22.297406   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:22.297466   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:22.334117   78080 cri.go:89] found id: ""
	I0729 18:29:22.334149   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.334159   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:22.334170   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:22.334184   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:22.349344   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:22.349369   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:22.415720   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:22.415743   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:22.415758   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:22.494937   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:22.494971   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:22.536352   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:22.536382   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:25.087795   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:25.103985   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:25.104050   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:25.158532   78080 cri.go:89] found id: ""
	I0729 18:29:25.158562   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.158572   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:25.158580   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:25.158641   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:25.216740   78080 cri.go:89] found id: ""
	I0729 18:29:25.216762   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.216769   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:25.216775   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:25.216827   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:25.254827   78080 cri.go:89] found id: ""
	I0729 18:29:25.254855   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.254865   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:25.254872   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:25.254934   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:25.289377   78080 cri.go:89] found id: ""
	I0729 18:29:25.289407   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.289417   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:25.289424   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:25.289484   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:25.328111   78080 cri.go:89] found id: ""
	I0729 18:29:25.328144   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.328153   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:25.328161   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:25.328224   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:25.364779   78080 cri.go:89] found id: ""
	I0729 18:29:25.364808   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.364815   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:25.364827   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:25.364874   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:25.402906   78080 cri.go:89] found id: ""
	I0729 18:29:25.402935   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.402942   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:25.402948   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:25.403007   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:25.438747   78080 cri.go:89] found id: ""
	I0729 18:29:25.438770   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.438778   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:25.438787   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:25.438803   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:25.452803   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:25.452829   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:25.527575   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:25.527593   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:25.527610   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:25.622437   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:25.622482   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:25.661451   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:25.661478   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:23.936249   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:26.434496   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:25.739358   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:27.739702   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:27.043125   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:29.542130   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:28.213898   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:28.230013   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:28.230071   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:28.265484   78080 cri.go:89] found id: ""
	I0729 18:29:28.265511   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.265521   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:28.265530   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:28.265594   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:28.306374   78080 cri.go:89] found id: ""
	I0729 18:29:28.306428   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.306441   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:28.306448   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:28.306501   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:28.340274   78080 cri.go:89] found id: ""
	I0729 18:29:28.340299   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.340309   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:28.340316   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:28.340379   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:28.373928   78080 cri.go:89] found id: ""
	I0729 18:29:28.373973   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.373982   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:28.373990   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:28.374052   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:28.407075   78080 cri.go:89] found id: ""
	I0729 18:29:28.407107   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.407120   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:28.407129   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:28.407215   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:28.444501   78080 cri.go:89] found id: ""
	I0729 18:29:28.444528   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.444536   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:28.444543   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:28.444614   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:28.487513   78080 cri.go:89] found id: ""
	I0729 18:29:28.487540   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.487548   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:28.487554   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:28.487611   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:28.521957   78080 cri.go:89] found id: ""
	I0729 18:29:28.521990   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.522000   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:28.522011   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:28.522027   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:28.536880   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:28.536918   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:28.609486   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:28.609513   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:28.609528   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:28.694086   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:28.694125   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:28.733930   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:28.733964   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:31.292260   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:31.305840   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:31.305899   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:31.342510   78080 cri.go:89] found id: ""
	I0729 18:29:31.342539   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.342550   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:31.342557   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:31.342613   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:31.375093   78080 cri.go:89] found id: ""
	I0729 18:29:31.375118   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.375128   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:31.375135   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:31.375198   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:31.408554   78080 cri.go:89] found id: ""
	I0729 18:29:31.408576   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.408583   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:31.408588   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:31.408660   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:31.448748   78080 cri.go:89] found id: ""
	I0729 18:29:31.448774   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.448783   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:31.448796   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:31.448855   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:31.483541   78080 cri.go:89] found id: ""
	I0729 18:29:31.483564   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.483572   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:31.483578   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:31.483637   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:31.518173   78080 cri.go:89] found id: ""
	I0729 18:29:31.518198   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.518209   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:31.518217   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:31.518279   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:31.553345   78080 cri.go:89] found id: ""
	I0729 18:29:31.553371   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.553379   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:31.553384   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:31.553439   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:31.591857   78080 cri.go:89] found id: ""
	I0729 18:29:31.591887   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.591905   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:31.591916   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:31.591929   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:31.648404   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:31.648436   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:31.661455   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:31.661477   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:31.732978   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:31.732997   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:31.733009   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:31.812105   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:31.812145   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:28.435517   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:30.436822   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:30.239755   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:32.739231   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:34.739534   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:31.542847   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:33.543096   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:34.353079   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:34.366759   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:34.366817   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:34.400944   78080 cri.go:89] found id: ""
	I0729 18:29:34.400974   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.400984   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:34.400991   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:34.401055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:34.439348   78080 cri.go:89] found id: ""
	I0729 18:29:34.439373   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.439383   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:34.439395   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:34.439444   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:34.473969   78080 cri.go:89] found id: ""
	I0729 18:29:34.473991   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.474010   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:34.474017   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:34.474080   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:34.507741   78080 cri.go:89] found id: ""
	I0729 18:29:34.507770   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.507778   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:34.507784   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:34.507845   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:34.543794   78080 cri.go:89] found id: ""
	I0729 18:29:34.543815   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.543823   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:34.543830   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:34.543895   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:34.577893   78080 cri.go:89] found id: ""
	I0729 18:29:34.577918   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.577926   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:34.577931   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:34.577978   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:34.612703   78080 cri.go:89] found id: ""
	I0729 18:29:34.612735   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.612745   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:34.612752   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:34.612815   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:34.648167   78080 cri.go:89] found id: ""
	I0729 18:29:34.648197   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.648209   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:34.648219   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:34.648233   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:34.689821   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:34.689848   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:34.743902   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:34.743935   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:34.757400   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:34.757426   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:34.833684   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:34.833706   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:34.833721   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:32.934207   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:34.936549   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:37.238618   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:39.239761   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:36.042461   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:38.543304   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:37.419270   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:37.433249   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:37.433301   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:37.469991   78080 cri.go:89] found id: ""
	I0729 18:29:37.470021   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.470031   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:37.470038   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:37.470098   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:37.504511   78080 cri.go:89] found id: ""
	I0729 18:29:37.504537   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.504548   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:37.504554   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:37.504612   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:37.545304   78080 cri.go:89] found id: ""
	I0729 18:29:37.545332   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.545342   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:37.545349   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:37.545406   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:37.584255   78080 cri.go:89] found id: ""
	I0729 18:29:37.584280   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.584287   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:37.584292   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:37.584345   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:37.620917   78080 cri.go:89] found id: ""
	I0729 18:29:37.620943   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.620951   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:37.620958   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:37.621022   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:37.659381   78080 cri.go:89] found id: ""
	I0729 18:29:37.659405   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.659414   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:37.659419   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:37.659486   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:37.701337   78080 cri.go:89] found id: ""
	I0729 18:29:37.701360   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.701368   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:37.701373   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:37.701426   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:37.737142   78080 cri.go:89] found id: ""
	I0729 18:29:37.737168   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.737177   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:37.737186   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:37.737201   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:37.789951   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:37.789992   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:37.804759   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:37.804784   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:37.881777   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:37.881794   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:37.881808   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:37.970593   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:37.970625   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:40.511557   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:40.525472   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:40.525527   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:40.564227   78080 cri.go:89] found id: ""
	I0729 18:29:40.564253   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.564263   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:40.564270   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:40.564336   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:40.600384   78080 cri.go:89] found id: ""
	I0729 18:29:40.600409   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.600417   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:40.600423   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:40.600475   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:40.634819   78080 cri.go:89] found id: ""
	I0729 18:29:40.634843   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.634858   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:40.634866   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:40.634913   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:40.669963   78080 cri.go:89] found id: ""
	I0729 18:29:40.669991   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.669999   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:40.670006   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:40.670069   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:40.705680   78080 cri.go:89] found id: ""
	I0729 18:29:40.705705   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.705714   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:40.705719   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:40.705775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:40.743691   78080 cri.go:89] found id: ""
	I0729 18:29:40.743715   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.743725   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:40.743732   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:40.743820   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:40.783858   78080 cri.go:89] found id: ""
	I0729 18:29:40.783889   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.783898   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:40.783903   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:40.783953   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:40.821499   78080 cri.go:89] found id: ""
	I0729 18:29:40.821527   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.821537   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:40.821547   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:40.821562   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:40.874941   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:40.874972   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:40.888034   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:40.888057   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:40.960013   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:40.960032   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:40.960044   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:41.043013   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:41.043042   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:37.435119   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:39.435967   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:41.934232   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:41.739070   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:43.739497   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:40.543453   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:43.042528   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:43.583555   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:43.597120   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:43.597193   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:43.631500   78080 cri.go:89] found id: ""
	I0729 18:29:43.631526   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.631535   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:43.631542   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:43.631607   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:43.667003   78080 cri.go:89] found id: ""
	I0729 18:29:43.667029   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.667037   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:43.667042   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:43.667102   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:43.701471   78080 cri.go:89] found id: ""
	I0729 18:29:43.701502   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.701510   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:43.701515   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:43.701569   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:43.740037   78080 cri.go:89] found id: ""
	I0729 18:29:43.740058   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.740067   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:43.740074   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:43.740145   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:43.772584   78080 cri.go:89] found id: ""
	I0729 18:29:43.772610   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.772620   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:43.772626   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:43.772689   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:43.806340   78080 cri.go:89] found id: ""
	I0729 18:29:43.806382   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.806393   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:43.806401   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:43.806480   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:43.840085   78080 cri.go:89] found id: ""
	I0729 18:29:43.840109   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.840118   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:43.840133   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:43.840198   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:43.873412   78080 cri.go:89] found id: ""
	I0729 18:29:43.873438   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.873448   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:43.873458   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:43.873473   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:43.928762   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:43.928790   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:43.944129   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:43.944156   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:44.017330   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:44.017349   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:44.017361   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:44.106858   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:44.106915   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
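	Every `kubectl describe nodes` attempt in this log fails the same way: the connection to localhost:8443 is refused, which is consistent with the empty crictl listings above showing that the kube-apiserver container never started. The sketch below is an illustrative standalone probe of that endpoint for manual debugging; the https://localhost:8443/healthz address and path are assumptions taken from the log output, and this probe is not part of the minikube test harness.

	// Illustrative reachability probe (not part of the test harness): try the
	// apiserver endpoint that the log keeps failing to reach.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 3 * time.Second,
			Transport: &http.Transport{
				// The apiserver serves a self-signed certificate during bring-up,
				// so skip verification for this quick reachability check only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://localhost:8443/healthz")
		if err != nil {
			// Matches the "connection ... refused" lines in the log above.
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver responded with status:", resp.Status)
	}

	Skipping TLS verification is acceptable here only because the probe checks reachability, not server identity.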
	I0729 18:29:46.651050   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:46.665253   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:46.665310   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:46.698846   78080 cri.go:89] found id: ""
	I0729 18:29:46.698871   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.698881   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:46.698888   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:46.698956   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:46.734354   78080 cri.go:89] found id: ""
	I0729 18:29:46.734395   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.734405   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:46.734413   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:46.734468   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:46.771978   78080 cri.go:89] found id: ""
	I0729 18:29:46.771999   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.772007   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:46.772012   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:46.772059   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:46.807231   78080 cri.go:89] found id: ""
	I0729 18:29:46.807255   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.807263   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:46.807272   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:46.807329   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:46.842257   78080 cri.go:89] found id: ""
	I0729 18:29:46.842278   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.842306   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:46.842312   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:46.842373   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:46.876287   78080 cri.go:89] found id: ""
	I0729 18:29:46.876309   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.876317   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:46.876323   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:46.876389   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:46.909695   78080 cri.go:89] found id: ""
	I0729 18:29:46.909719   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.909726   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:46.909731   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:46.909806   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:46.951768   78080 cri.go:89] found id: ""
	I0729 18:29:46.951798   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.951807   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:46.951815   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:46.951825   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:47.025467   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:47.025485   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:47.025497   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:47.106336   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:47.106391   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:47.145652   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:47.145682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:47.200857   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:47.200886   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:43.935210   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:46.434346   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:45.739606   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:48.240282   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:45.544442   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:48.042872   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:49.715401   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:49.729703   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:49.729776   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:49.770016   78080 cri.go:89] found id: ""
	I0729 18:29:49.770039   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.770062   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:49.770070   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:49.770127   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:49.805464   78080 cri.go:89] found id: ""
	I0729 18:29:49.805487   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.805495   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:49.805500   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:49.805560   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:49.838739   78080 cri.go:89] found id: ""
	I0729 18:29:49.838770   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.838782   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:49.838789   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:49.838861   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:49.881168   78080 cri.go:89] found id: ""
	I0729 18:29:49.881194   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.881202   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:49.881208   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:49.881269   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:49.919978   78080 cri.go:89] found id: ""
	I0729 18:29:49.919999   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.920006   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:49.920012   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:49.920079   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:49.958971   78080 cri.go:89] found id: ""
	I0729 18:29:49.958996   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.959006   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:49.959013   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:49.959063   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:50.001253   78080 cri.go:89] found id: ""
	I0729 18:29:50.001281   78080 logs.go:276] 0 containers: []
	W0729 18:29:50.001291   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:50.001298   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:50.001362   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:50.038729   78080 cri.go:89] found id: ""
	I0729 18:29:50.038755   78080 logs.go:276] 0 containers: []
	W0729 18:29:50.038766   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:50.038776   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:50.038789   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:50.082540   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:50.082567   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:50.132372   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:50.132413   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:50.146806   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:50.146835   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:50.214495   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:50.214515   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:50.214532   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:48.435540   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:50.935475   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:50.240626   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:52.739158   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:50.044073   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:52.047924   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:54.542657   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:52.793987   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:52.808085   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:52.808149   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:52.844869   78080 cri.go:89] found id: ""
	I0729 18:29:52.844904   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.844917   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:52.844925   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:52.844986   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:52.878097   78080 cri.go:89] found id: ""
	I0729 18:29:52.878122   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.878135   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:52.878142   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:52.878191   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:52.910843   78080 cri.go:89] found id: ""
	I0729 18:29:52.910884   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.910894   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:52.910902   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:52.910953   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:52.943233   78080 cri.go:89] found id: ""
	I0729 18:29:52.943257   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.943267   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:52.943274   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:52.943335   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:52.978354   78080 cri.go:89] found id: ""
	I0729 18:29:52.978402   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.978413   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:52.978423   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:52.978503   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:53.011238   78080 cri.go:89] found id: ""
	I0729 18:29:53.011266   78080 logs.go:276] 0 containers: []
	W0729 18:29:53.011276   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:53.011283   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:53.011336   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:53.048787   78080 cri.go:89] found id: ""
	I0729 18:29:53.048817   78080 logs.go:276] 0 containers: []
	W0729 18:29:53.048827   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:53.048834   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:53.048900   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:53.086108   78080 cri.go:89] found id: ""
	I0729 18:29:53.086135   78080 logs.go:276] 0 containers: []
	W0729 18:29:53.086156   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:53.086176   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:53.086195   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:53.137552   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:53.137580   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:53.151308   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:53.151333   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:53.225968   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:53.225992   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:53.226004   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:53.308111   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:53.308145   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:55.850207   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:55.864003   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:55.864054   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:55.898109   78080 cri.go:89] found id: ""
	I0729 18:29:55.898134   78080 logs.go:276] 0 containers: []
	W0729 18:29:55.898142   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:55.898148   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:55.898201   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:55.931616   78080 cri.go:89] found id: ""
	I0729 18:29:55.931643   78080 logs.go:276] 0 containers: []
	W0729 18:29:55.931653   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:55.931660   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:55.931719   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:55.969034   78080 cri.go:89] found id: ""
	I0729 18:29:55.969063   78080 logs.go:276] 0 containers: []
	W0729 18:29:55.969073   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:55.969080   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:55.969142   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:56.007552   78080 cri.go:89] found id: ""
	I0729 18:29:56.007576   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.007586   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:56.007592   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:56.007653   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:56.044342   78080 cri.go:89] found id: ""
	I0729 18:29:56.044367   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.044376   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:56.044382   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:56.044437   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:56.078352   78080 cri.go:89] found id: ""
	I0729 18:29:56.078396   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.078412   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:56.078420   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:56.078471   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:56.116505   78080 cri.go:89] found id: ""
	I0729 18:29:56.116532   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.116543   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:56.116551   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:56.116611   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:56.151493   78080 cri.go:89] found id: ""
	I0729 18:29:56.151516   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.151523   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:56.151530   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:56.151542   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:56.206170   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:56.206198   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:56.219658   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:56.219684   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:56.290279   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:56.290300   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:56.290312   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:56.371352   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:56.371382   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:53.434046   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:55.435343   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:55.239055   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:57.241032   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:59.740003   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:57.041745   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:59.042416   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:58.908793   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:58.922566   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:58.922626   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:58.959375   78080 cri.go:89] found id: ""
	I0729 18:29:58.959397   78080 logs.go:276] 0 containers: []
	W0729 18:29:58.959404   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:58.959410   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:58.959459   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:58.993235   78080 cri.go:89] found id: ""
	I0729 18:29:58.993257   78080 logs.go:276] 0 containers: []
	W0729 18:29:58.993265   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:58.993271   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:58.993331   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:59.028186   78080 cri.go:89] found id: ""
	I0729 18:29:59.028212   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.028220   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:59.028225   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:59.028271   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:59.063589   78080 cri.go:89] found id: ""
	I0729 18:29:59.063619   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.063628   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:59.063635   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:59.063695   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:59.101116   78080 cri.go:89] found id: ""
	I0729 18:29:59.101142   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.101152   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:59.101158   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:59.101208   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:59.135288   78080 cri.go:89] found id: ""
	I0729 18:29:59.135314   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.135324   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:59.135332   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:59.135395   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:59.170520   78080 cri.go:89] found id: ""
	I0729 18:29:59.170549   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.170557   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:59.170562   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:59.170618   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:59.229796   78080 cri.go:89] found id: ""
	I0729 18:29:59.229825   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.229835   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:59.229843   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:59.229871   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:59.244654   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:59.244682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:59.321262   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:59.321286   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:59.321301   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:59.401423   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:59.401459   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:59.442916   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:59.442938   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:01.995116   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:02.008454   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:02.008516   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:02.046412   78080 cri.go:89] found id: ""
	I0729 18:30:02.046431   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.046438   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:02.046443   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:02.046487   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:02.082444   78080 cri.go:89] found id: ""
	I0729 18:30:02.082466   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.082476   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:02.082482   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:02.082551   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:02.116013   78080 cri.go:89] found id: ""
	I0729 18:30:02.116041   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.116052   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:02.116058   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:02.116127   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:02.155817   78080 cri.go:89] found id: ""
	I0729 18:30:02.155844   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.155854   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:02.155862   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:02.155914   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:02.195518   78080 cri.go:89] found id: ""
	I0729 18:30:02.195548   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.195556   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:02.195563   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:02.195624   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:57.934058   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:59.934547   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:01.935238   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:01.742050   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:04.239758   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:01.043550   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:03.542544   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:02.228248   78080 cri.go:89] found id: ""
	I0729 18:30:02.228274   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.228283   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:02.228289   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:02.228370   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:02.262441   78080 cri.go:89] found id: ""
	I0729 18:30:02.262469   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.262479   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:02.262486   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:02.262546   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:02.296900   78080 cri.go:89] found id: ""
	I0729 18:30:02.296930   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.296937   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:02.296953   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:02.296965   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:02.352356   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:02.352389   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:02.366336   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:02.366365   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:02.441367   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:02.441389   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:02.441403   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:02.524134   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:02.524173   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:05.071581   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:05.085481   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:05.085535   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:05.121610   78080 cri.go:89] found id: ""
	I0729 18:30:05.121636   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.121644   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:05.121652   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:05.121716   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:05.157382   78080 cri.go:89] found id: ""
	I0729 18:30:05.157406   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.157413   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:05.157418   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:05.157478   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:05.195552   78080 cri.go:89] found id: ""
	I0729 18:30:05.195582   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.195593   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:05.195600   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:05.195657   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:05.231071   78080 cri.go:89] found id: ""
	I0729 18:30:05.231095   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.231103   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:05.231108   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:05.231165   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:05.267445   78080 cri.go:89] found id: ""
	I0729 18:30:05.267474   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.267485   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:05.267493   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:05.267555   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:05.304258   78080 cri.go:89] found id: ""
	I0729 18:30:05.304279   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.304286   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:05.304291   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:05.304338   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:05.339155   78080 cri.go:89] found id: ""
	I0729 18:30:05.339176   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.339184   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:05.339190   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:05.339243   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:05.375291   78080 cri.go:89] found id: ""
	I0729 18:30:05.375328   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.375337   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:05.375346   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:05.375361   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:05.446196   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:05.446221   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:05.446236   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:05.529421   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:05.529457   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:05.570234   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:05.570269   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:05.629349   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:05.629391   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:04.434625   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:06.934246   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:06.239886   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:08.242421   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:05.543394   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:08.042242   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:08.151320   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:08.165983   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:08.166045   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:08.205703   78080 cri.go:89] found id: ""
	I0729 18:30:08.205726   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.205733   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:08.205738   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:08.205786   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:08.245919   78080 cri.go:89] found id: ""
	I0729 18:30:08.245946   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.245957   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:08.245964   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:08.246024   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:08.286595   78080 cri.go:89] found id: ""
	I0729 18:30:08.286621   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.286631   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:08.286638   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:08.286700   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:08.330032   78080 cri.go:89] found id: ""
	I0729 18:30:08.330060   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.330070   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:08.330077   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:08.330140   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:08.362535   78080 cri.go:89] found id: ""
	I0729 18:30:08.362567   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.362578   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:08.362586   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:08.362645   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:08.397648   78080 cri.go:89] found id: ""
	I0729 18:30:08.397678   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.397688   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:08.397704   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:08.397766   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:08.433615   78080 cri.go:89] found id: ""
	I0729 18:30:08.433693   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.433716   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:08.433734   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:08.433809   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:08.465765   78080 cri.go:89] found id: ""
	I0729 18:30:08.465792   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.465803   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:08.465814   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:08.465829   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:08.536332   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:08.536360   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:08.536375   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:08.613737   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:08.613776   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:08.659707   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:08.659736   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:08.712702   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:08.712736   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:11.226660   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:11.240852   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:11.240919   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:11.277632   78080 cri.go:89] found id: ""
	I0729 18:30:11.277664   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.277675   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:11.277682   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:11.277751   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:11.312458   78080 cri.go:89] found id: ""
	I0729 18:30:11.312478   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.312485   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:11.312491   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:11.312551   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:11.350375   78080 cri.go:89] found id: ""
	I0729 18:30:11.350406   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.350416   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:11.350424   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:11.350486   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:11.389280   78080 cri.go:89] found id: ""
	I0729 18:30:11.389307   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.389317   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:11.389324   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:11.389382   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:11.424907   78080 cri.go:89] found id: ""
	I0729 18:30:11.424936   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.424944   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:11.424949   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:11.425009   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:11.480686   78080 cri.go:89] found id: ""
	I0729 18:30:11.480713   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.480720   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:11.480726   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:11.480778   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:11.514831   78080 cri.go:89] found id: ""
	I0729 18:30:11.514857   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.514864   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:11.514870   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:11.514917   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:11.547930   78080 cri.go:89] found id: ""
	I0729 18:30:11.547955   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.547964   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:11.547974   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:11.547989   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:11.586068   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:11.586098   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:11.646857   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:11.646892   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:11.663549   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:11.663576   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:11.731362   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:11.731383   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:11.731397   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:08.934638   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:11.434765   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:10.738608   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:12.740637   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:10.042514   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:12.042731   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:14.042952   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:14.315531   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:14.330485   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:14.330544   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:14.363403   78080 cri.go:89] found id: ""
	I0729 18:30:14.363433   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.363444   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:14.363451   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:14.363516   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:14.401204   78080 cri.go:89] found id: ""
	I0729 18:30:14.401227   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.401234   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:14.401240   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:14.401301   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:14.436737   78080 cri.go:89] found id: ""
	I0729 18:30:14.436765   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.436775   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:14.436782   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:14.436844   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:14.471376   78080 cri.go:89] found id: ""
	I0729 18:30:14.471403   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.471411   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:14.471419   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:14.471478   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:14.506883   78080 cri.go:89] found id: ""
	I0729 18:30:14.506914   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.506925   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:14.506932   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:14.506990   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:14.546444   78080 cri.go:89] found id: ""
	I0729 18:30:14.546469   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.546479   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:14.546486   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:14.546552   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:14.580282   78080 cri.go:89] found id: ""
	I0729 18:30:14.580313   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.580320   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:14.580326   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:14.580387   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:14.614185   78080 cri.go:89] found id: ""
	I0729 18:30:14.614210   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.614220   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:14.614231   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:14.614246   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:14.652588   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:14.652610   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:14.706056   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:14.706090   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:14.719332   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:14.719356   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:14.792087   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:14.792115   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:14.792136   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:13.934967   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:16.435238   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:14.740676   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:17.239466   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:19.239656   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:16.541564   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:18.547053   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:17.375639   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:17.389473   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:17.389535   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:17.424485   78080 cri.go:89] found id: ""
	I0729 18:30:17.424513   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.424521   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:17.424527   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:17.424572   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:17.461100   78080 cri.go:89] found id: ""
	I0729 18:30:17.461129   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.461136   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:17.461141   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:17.461191   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:17.494866   78080 cri.go:89] found id: ""
	I0729 18:30:17.494894   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.494902   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:17.494907   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:17.494983   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:17.529897   78080 cri.go:89] found id: ""
	I0729 18:30:17.529924   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.529934   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:17.529940   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:17.530002   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:17.569870   78080 cri.go:89] found id: ""
	I0729 18:30:17.569897   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.569905   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:17.569910   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:17.569958   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:17.605324   78080 cri.go:89] found id: ""
	I0729 18:30:17.605364   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.605384   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:17.605392   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:17.605457   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:17.640552   78080 cri.go:89] found id: ""
	I0729 18:30:17.640583   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.640595   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:17.640602   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:17.640668   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:17.679769   78080 cri.go:89] found id: ""
	I0729 18:30:17.679800   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.679808   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:17.679827   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:17.679843   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:17.757782   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:17.757814   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:17.803850   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:17.803878   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:17.857987   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:17.858017   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:17.871062   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:17.871086   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:17.940456   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:20.441171   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:20.454752   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:20.454824   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:20.490744   78080 cri.go:89] found id: ""
	I0729 18:30:20.490773   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.490783   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:20.490791   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:20.490853   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:20.524406   78080 cri.go:89] found id: ""
	I0729 18:30:20.524437   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.524448   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:20.524463   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:20.524515   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:20.559225   78080 cri.go:89] found id: ""
	I0729 18:30:20.559257   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.559268   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:20.559275   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:20.559337   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:20.595297   78080 cri.go:89] found id: ""
	I0729 18:30:20.595324   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.595355   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:20.595364   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:20.595436   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:20.632176   78080 cri.go:89] found id: ""
	I0729 18:30:20.632204   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.632215   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:20.632222   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:20.632282   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:20.676600   78080 cri.go:89] found id: ""
	I0729 18:30:20.676625   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.676632   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:20.676638   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:20.676734   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:20.717920   78080 cri.go:89] found id: ""
	I0729 18:30:20.717945   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.717955   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:20.717966   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:20.718021   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:20.756217   78080 cri.go:89] found id: ""
	I0729 18:30:20.756243   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.756253   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:20.756262   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:20.756277   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:20.837150   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:20.837189   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:20.876023   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:20.876050   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:20.932402   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:20.932429   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:20.947422   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:20.947454   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:21.022698   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
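	Note on the cycle above: every retry finds no kube-apiserver process and no kube-apiserver container, so the describe-nodes step fails because nothing is listening on the kubeconfig's localhost:8443 endpoint. The same checks can be reproduced by hand over minikube ssh; a minimal sketch using only commands already shown in the log (PROFILE is a placeholder, not taken from this run):

	  # Placeholder profile name; substitute the profile under test.
	  PROFILE=<profile-name>

	  # Is a kube-apiserver process running? (same pgrep the log runs)
	  out/minikube-linux-amd64 -p "$PROFILE" ssh \
	    "sudo pgrep -xnf kube-apiserver.*minikube.* || echo no-apiserver-process"

	  # Has the CRI ever created a kube-apiserver container? (same crictl query)
	  out/minikube-linux-amd64 -p "$PROFILE" ssh \
	    "sudo crictl ps -a --quiet --name=kube-apiserver"

	  # With no apiserver up, this reproduces the "connection refused" on localhost:8443.
	  out/minikube-linux-amd64 -p "$PROFILE" ssh \
	    "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"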
	I0729 18:30:18.934790   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:21.434992   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:21.242999   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:23.739073   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:21.042689   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:23.042794   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:23.523141   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:23.538019   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:23.538098   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:23.576953   78080 cri.go:89] found id: ""
	I0729 18:30:23.576979   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.576991   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:23.576998   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:23.577060   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:23.613052   78080 cri.go:89] found id: ""
	I0729 18:30:23.613083   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.613094   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:23.613100   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:23.613170   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:23.648694   78080 cri.go:89] found id: ""
	I0729 18:30:23.648717   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.648725   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:23.648730   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:23.648775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:23.680939   78080 cri.go:89] found id: ""
	I0729 18:30:23.680965   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.680972   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:23.680977   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:23.681032   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:23.716529   78080 cri.go:89] found id: ""
	I0729 18:30:23.716556   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.716564   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:23.716569   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:23.716628   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:23.756833   78080 cri.go:89] found id: ""
	I0729 18:30:23.756860   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.756868   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:23.756873   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:23.756918   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:23.796436   78080 cri.go:89] found id: ""
	I0729 18:30:23.796460   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.796467   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:23.796472   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:23.796519   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:23.839877   78080 cri.go:89] found id: ""
	I0729 18:30:23.839906   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.839914   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:23.839922   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:23.839934   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:23.879423   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:23.879447   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:23.928379   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:23.928408   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:23.942639   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:23.942669   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:24.014068   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:24.014095   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:24.014110   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:26.597923   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:26.610877   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:26.610945   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:26.647550   78080 cri.go:89] found id: ""
	I0729 18:30:26.647579   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.647590   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:26.647598   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:26.647655   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:26.681552   78080 cri.go:89] found id: ""
	I0729 18:30:26.681581   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.681589   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:26.681595   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:26.681660   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:26.714475   78080 cri.go:89] found id: ""
	I0729 18:30:26.714503   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.714513   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:26.714519   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:26.714588   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:26.748671   78080 cri.go:89] found id: ""
	I0729 18:30:26.748697   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.748707   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:26.748714   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:26.748775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:26.781380   78080 cri.go:89] found id: ""
	I0729 18:30:26.781406   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.781421   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:26.781429   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:26.781483   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:26.815201   78080 cri.go:89] found id: ""
	I0729 18:30:26.815230   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.815243   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:26.815251   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:26.815318   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:26.848600   78080 cri.go:89] found id: ""
	I0729 18:30:26.848628   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.848637   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:26.848644   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:26.848724   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:26.883828   78080 cri.go:89] found id: ""
	I0729 18:30:26.883872   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.883883   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:26.883893   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:26.883908   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:26.936955   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:26.936987   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:26.952212   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:26.952238   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:27.019389   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:27.019413   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:27.019426   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:27.095654   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:27.095682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:23.935397   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:26.435231   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:26.238749   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:28.239699   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:25.044320   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:27.542022   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:29.542274   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
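	The interleaved pod_ready lines appear to come from other test profiles running in parallel; each is polling its metrics-server pod's Ready condition and seeing False. A hedged kubectl equivalent of that poll (the context name is a placeholder and k8s-app=metrics-server is the conventional label; neither is taken from this run):

	  kubectl --context <profile-name> -n kube-system get pods -l k8s-app=metrics-server \
	    -o jsonpath='{range .items[*]}{.metadata.name}{" Ready="}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'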
	I0729 18:30:29.637269   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:29.652138   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:29.652211   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:29.691063   78080 cri.go:89] found id: ""
	I0729 18:30:29.691094   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.691104   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:29.691111   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:29.691173   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:29.725188   78080 cri.go:89] found id: ""
	I0729 18:30:29.725224   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.725232   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:29.725240   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:29.725308   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:29.764118   78080 cri.go:89] found id: ""
	I0729 18:30:29.764149   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.764159   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:29.764167   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:29.764232   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:29.797884   78080 cri.go:89] found id: ""
	I0729 18:30:29.797909   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.797919   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:29.797927   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:29.797989   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:29.838784   78080 cri.go:89] found id: ""
	I0729 18:30:29.838808   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.838815   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:29.838821   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:29.838885   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:29.872394   78080 cri.go:89] found id: ""
	I0729 18:30:29.872420   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.872427   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:29.872433   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:29.872491   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:29.908966   78080 cri.go:89] found id: ""
	I0729 18:30:29.908995   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.909012   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:29.909020   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:29.909081   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:29.946322   78080 cri.go:89] found id: ""
	I0729 18:30:29.946344   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.946352   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:29.946371   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:29.946386   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:30.019133   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:30.019166   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:30.019179   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:30.096499   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:30.096532   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:30.136487   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:30.136519   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:30.187341   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:30.187374   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:28.435472   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:30.934817   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:30.739101   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:32.742029   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:32.042850   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:34.042919   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:32.703546   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:32.716981   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:32.717042   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:32.753275   78080 cri.go:89] found id: ""
	I0729 18:30:32.753307   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.753318   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:32.753326   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:32.753393   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:32.789075   78080 cri.go:89] found id: ""
	I0729 18:30:32.789105   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.789116   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:32.789123   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:32.789185   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:32.822945   78080 cri.go:89] found id: ""
	I0729 18:30:32.822971   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.822979   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:32.822984   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:32.823033   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:32.856523   78080 cri.go:89] found id: ""
	I0729 18:30:32.856577   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.856589   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:32.856597   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:32.856661   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:32.895768   78080 cri.go:89] found id: ""
	I0729 18:30:32.895798   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.895810   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:32.895817   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:32.895876   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:32.934990   78080 cri.go:89] found id: ""
	I0729 18:30:32.935030   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.935042   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:32.935054   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:32.935132   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:32.970924   78080 cri.go:89] found id: ""
	I0729 18:30:32.970949   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.970957   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:32.970964   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:32.971022   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:33.004133   78080 cri.go:89] found id: ""
	I0729 18:30:33.004164   78080 logs.go:276] 0 containers: []
	W0729 18:30:33.004173   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:33.004182   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:33.004202   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:33.043432   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:33.043467   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:33.095517   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:33.095554   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:33.108859   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:33.108889   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:33.180661   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:33.180681   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:33.180696   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:35.763324   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:35.777060   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:35.777138   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:35.812601   78080 cri.go:89] found id: ""
	I0729 18:30:35.812636   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.812647   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:35.812654   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:35.812719   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:35.848116   78080 cri.go:89] found id: ""
	I0729 18:30:35.848161   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.848172   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:35.848179   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:35.848240   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:35.895786   78080 cri.go:89] found id: ""
	I0729 18:30:35.895817   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.895829   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:35.895837   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:35.895911   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:35.936753   78080 cri.go:89] found id: ""
	I0729 18:30:35.936780   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.936787   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:35.936794   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:35.936848   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:35.971321   78080 cri.go:89] found id: ""
	I0729 18:30:35.971349   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.971358   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:35.971371   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:35.971434   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:36.018702   78080 cri.go:89] found id: ""
	I0729 18:30:36.018725   78080 logs.go:276] 0 containers: []
	W0729 18:30:36.018732   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:36.018737   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:36.018792   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:36.054829   78080 cri.go:89] found id: ""
	I0729 18:30:36.054865   78080 logs.go:276] 0 containers: []
	W0729 18:30:36.054875   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:36.054882   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:36.054948   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:36.087456   78080 cri.go:89] found id: ""
	I0729 18:30:36.087483   78080 logs.go:276] 0 containers: []
	W0729 18:30:36.087492   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:36.087500   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:36.087512   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:36.140919   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:36.140951   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:36.155581   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:36.155614   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:36.227617   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:36.227642   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:36.227669   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:36.304610   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:36.304651   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:32.935270   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:34.935362   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:35.239258   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:37.242161   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:39.739031   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:36.043489   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:38.542041   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:38.843099   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:38.857571   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:38.857626   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:38.890760   78080 cri.go:89] found id: ""
	I0729 18:30:38.890790   78080 logs.go:276] 0 containers: []
	W0729 18:30:38.890801   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:38.890809   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:38.890884   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:38.932701   78080 cri.go:89] found id: ""
	I0729 18:30:38.932738   78080 logs.go:276] 0 containers: []
	W0729 18:30:38.932748   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:38.932755   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:38.932812   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:38.967379   78080 cri.go:89] found id: ""
	I0729 18:30:38.967406   78080 logs.go:276] 0 containers: []
	W0729 18:30:38.967416   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:38.967430   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:38.967490   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:39.000419   78080 cri.go:89] found id: ""
	I0729 18:30:39.000450   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.000459   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:39.000466   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:39.000528   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:39.033764   78080 cri.go:89] found id: ""
	I0729 18:30:39.033793   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.033802   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:39.033807   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:39.033857   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:39.070904   78080 cri.go:89] found id: ""
	I0729 18:30:39.070933   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.070944   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:39.070951   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:39.071010   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:39.107444   78080 cri.go:89] found id: ""
	I0729 18:30:39.107471   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.107480   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:39.107488   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:39.107549   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:39.141392   78080 cri.go:89] found id: ""
	I0729 18:30:39.141423   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.141436   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:39.141449   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:39.141464   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:39.154874   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:39.154905   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:39.229370   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:39.229396   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:39.229413   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:39.310508   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:39.310538   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:39.352547   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:39.352569   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
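	Each gathering pass above pulls the same sources: the kubelet and CRI-O journals, a filtered dmesg, container status, and a describe-nodes attempt. To collect the first four by hand, the log's own commands can be wrapped in minikube ssh (PROFILE is again a placeholder, as in the earlier sketch):

	  out/minikube-linux-amd64 -p "$PROFILE" ssh "sudo journalctl -u kubelet -n 400"
	  out/minikube-linux-amd64 -p "$PROFILE" ssh "sudo journalctl -u crio -n 400"
	  out/minikube-linux-amd64 -p "$PROFILE" ssh \
	    "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	  out/minikube-linux-amd64 -p "$PROFILE" ssh "sudo crictl ps -a || sudo docker ps -a"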
	I0729 18:30:41.908463   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:41.922132   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:41.922209   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:41.960404   78080 cri.go:89] found id: ""
	I0729 18:30:41.960431   78080 logs.go:276] 0 containers: []
	W0729 18:30:41.960439   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:41.960444   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:41.960498   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:41.994082   78080 cri.go:89] found id: ""
	I0729 18:30:41.994110   78080 logs.go:276] 0 containers: []
	W0729 18:30:41.994117   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:41.994123   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:41.994177   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:42.030301   78080 cri.go:89] found id: ""
	I0729 18:30:42.030322   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.030330   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:42.030336   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:42.030401   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:42.064310   78080 cri.go:89] found id: ""
	I0729 18:30:42.064339   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.064349   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:42.064356   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:42.064413   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:42.097705   78080 cri.go:89] found id: ""
	I0729 18:30:42.097738   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.097748   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:42.097761   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:42.097819   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:42.133254   78080 cri.go:89] found id: ""
	I0729 18:30:42.133282   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.133292   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:42.133299   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:42.133361   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:42.170028   78080 cri.go:89] found id: ""
	I0729 18:30:42.170054   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.170063   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:42.170075   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:42.170141   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:42.205680   78080 cri.go:89] found id: ""
	I0729 18:30:42.205712   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.205723   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:42.205736   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:42.205749   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:37.442211   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:39.934866   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:41.935293   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:42.240035   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:41.041897   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:43.042300   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:42.246322   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:42.246350   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:42.300852   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:42.300884   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:42.316306   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:42.316333   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:42.389898   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:42.389920   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:42.389934   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:44.971238   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:44.984796   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:44.984846   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:45.021842   78080 cri.go:89] found id: ""
	I0729 18:30:45.021868   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.021877   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:45.021885   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:45.021958   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:45.059353   78080 cri.go:89] found id: ""
	I0729 18:30:45.059377   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.059387   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:45.059394   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:45.059456   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:45.094867   78080 cri.go:89] found id: ""
	I0729 18:30:45.094900   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.094911   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:45.094918   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:45.094974   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:45.128589   78080 cri.go:89] found id: ""
	I0729 18:30:45.128614   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.128622   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:45.128628   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:45.128671   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:45.160137   78080 cri.go:89] found id: ""
	I0729 18:30:45.160165   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.160172   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:45.160177   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:45.160228   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:45.205757   78080 cri.go:89] found id: ""
	I0729 18:30:45.205780   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.205787   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:45.205793   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:45.205840   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:45.250056   78080 cri.go:89] found id: ""
	I0729 18:30:45.250084   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.250091   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:45.250096   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:45.250179   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:45.285349   78080 cri.go:89] found id: ""
	I0729 18:30:45.285372   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.285380   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:45.285389   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:45.285401   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:45.364188   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:45.364218   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:45.412638   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:45.412660   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:45.467713   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:45.467745   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:45.483811   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:45.483835   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:45.564866   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:44.434921   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:46.934237   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:44.740648   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:47.239253   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:49.240229   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:45.043415   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:47.542757   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:49.543251   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:48.065579   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:48.079441   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:48.079511   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:48.115540   78080 cri.go:89] found id: ""
	I0729 18:30:48.115569   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.115578   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:48.115586   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:48.115670   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:48.151810   78080 cri.go:89] found id: ""
	I0729 18:30:48.151834   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.151841   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:48.151847   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:48.151913   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:48.187459   78080 cri.go:89] found id: ""
	I0729 18:30:48.187490   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.187500   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:48.187508   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:48.187568   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:48.226804   78080 cri.go:89] found id: ""
	I0729 18:30:48.226835   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.226846   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:48.226853   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:48.226916   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:48.260413   78080 cri.go:89] found id: ""
	I0729 18:30:48.260439   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.260448   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:48.260455   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:48.260517   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:48.296719   78080 cri.go:89] found id: ""
	I0729 18:30:48.296743   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.296751   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:48.296756   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:48.296806   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:48.331969   78080 cri.go:89] found id: ""
	I0729 18:30:48.331995   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.332002   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:48.332008   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:48.332055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:48.370593   78080 cri.go:89] found id: ""
	I0729 18:30:48.370618   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.370626   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:48.370634   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:48.370645   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:48.410653   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:48.410679   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:48.465467   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:48.465503   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:48.480025   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:48.480053   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:48.557806   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:48.557824   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:48.557840   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:51.140743   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:51.153970   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:51.154046   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:51.187826   78080 cri.go:89] found id: ""
	I0729 18:30:51.187851   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.187862   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:51.187868   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:51.187922   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:51.226140   78080 cri.go:89] found id: ""
	I0729 18:30:51.226172   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.226182   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:51.226189   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:51.226255   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:51.262321   78080 cri.go:89] found id: ""
	I0729 18:30:51.262349   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.262357   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:51.262378   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:51.262440   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:51.295356   78080 cri.go:89] found id: ""
	I0729 18:30:51.295383   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.295395   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:51.295403   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:51.295467   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:51.328320   78080 cri.go:89] found id: ""
	I0729 18:30:51.328349   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.328361   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:51.328367   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:51.328424   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:51.364202   78080 cri.go:89] found id: ""
	I0729 18:30:51.364233   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.364242   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:51.364249   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:51.364313   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:51.405500   78080 cri.go:89] found id: ""
	I0729 18:30:51.405529   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.405538   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:51.405544   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:51.405606   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:51.443519   78080 cri.go:89] found id: ""
	I0729 18:30:51.443541   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.443548   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:51.443556   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:51.443567   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:51.495560   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:51.495599   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:51.512152   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:51.512178   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:51.590972   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:51.590992   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:51.591021   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:51.688717   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:51.688757   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:48.934577   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:51.437173   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:51.739680   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:54.238626   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:52.044254   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:54.545288   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:54.256011   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:54.270602   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:54.270653   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:54.311547   78080 cri.go:89] found id: ""
	I0729 18:30:54.311574   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.311584   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:54.311592   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:54.311655   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:54.347559   78080 cri.go:89] found id: ""
	I0729 18:30:54.347591   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.347602   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:54.347610   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:54.347675   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:54.382180   78080 cri.go:89] found id: ""
	I0729 18:30:54.382205   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.382212   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:54.382217   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:54.382264   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:54.415560   78080 cri.go:89] found id: ""
	I0729 18:30:54.415587   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.415594   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:54.415600   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:54.415655   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:54.450313   78080 cri.go:89] found id: ""
	I0729 18:30:54.450341   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.450351   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:54.450372   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:54.450439   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:54.484649   78080 cri.go:89] found id: ""
	I0729 18:30:54.484678   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.484687   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:54.484694   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:54.484741   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:54.520170   78080 cri.go:89] found id: ""
	I0729 18:30:54.520204   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.520212   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:54.520220   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:54.520270   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:54.562724   78080 cri.go:89] found id: ""
	I0729 18:30:54.562753   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.562762   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:54.562772   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:54.562788   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:54.617461   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:54.617498   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:54.630970   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:54.630993   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:54.699332   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:54.699353   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:54.699366   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:54.779240   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:54.779276   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:53.934151   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:56.434549   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:56.239554   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:58.239583   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:57.041845   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:59.042164   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:57.318673   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:57.332789   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:57.332845   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:57.370434   78080 cri.go:89] found id: ""
	I0729 18:30:57.370461   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.370486   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:57.370492   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:57.370547   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:57.420694   78080 cri.go:89] found id: ""
	I0729 18:30:57.420724   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.420735   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:57.420742   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:57.420808   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:57.469245   78080 cri.go:89] found id: ""
	I0729 18:30:57.469271   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.469282   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:57.469288   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:57.469355   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:57.524937   78080 cri.go:89] found id: ""
	I0729 18:30:57.524963   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.524970   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:57.524976   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:57.525031   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:57.566803   78080 cri.go:89] found id: ""
	I0729 18:30:57.566830   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.566840   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:57.566847   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:57.566910   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:57.602786   78080 cri.go:89] found id: ""
	I0729 18:30:57.602814   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.602821   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:57.602826   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:57.602891   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:57.639319   78080 cri.go:89] found id: ""
	I0729 18:30:57.639347   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.639355   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:57.639361   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:57.639408   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:57.672580   78080 cri.go:89] found id: ""
	I0729 18:30:57.672610   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.672621   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:57.672632   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:57.672647   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:57.751550   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:57.751572   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:57.751586   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:57.840057   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:57.840097   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:57.884698   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:57.884737   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:57.944468   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:57.944497   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:00.459605   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:00.473079   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:00.473138   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:00.508492   78080 cri.go:89] found id: ""
	I0729 18:31:00.508525   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.508536   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:00.508543   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:00.508604   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:00.544844   78080 cri.go:89] found id: ""
	I0729 18:31:00.544875   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.544886   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:00.544899   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:00.544960   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:00.578402   78080 cri.go:89] found id: ""
	I0729 18:31:00.578432   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.578443   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:00.578450   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:00.578508   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:00.611886   78080 cri.go:89] found id: ""
	I0729 18:31:00.611913   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.611922   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:00.611928   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:00.611989   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:00.649126   78080 cri.go:89] found id: ""
	I0729 18:31:00.649153   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.649162   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:00.649168   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:00.649229   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:00.686534   78080 cri.go:89] found id: ""
	I0729 18:31:00.686561   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.686571   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:00.686578   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:00.686639   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:00.718656   78080 cri.go:89] found id: ""
	I0729 18:31:00.718680   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.718690   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:00.718696   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:00.718755   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:00.752740   78080 cri.go:89] found id: ""
	I0729 18:31:00.752766   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.752776   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:00.752786   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:00.752800   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:00.804293   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:00.804323   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:00.817988   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:00.818010   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:00.892178   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:00.892210   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:00.892231   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:00.973164   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:00.973199   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:58.434888   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:00.934518   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:00.239908   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:02.240038   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:04.240420   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:01.542080   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:03.542877   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:04.036213   77627 pod_ready.go:81] duration metric: took 4m0.000109353s for pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace to be "Ready" ...
	E0729 18:31:04.036235   77627 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 18:31:04.036250   77627 pod_ready.go:38] duration metric: took 4m10.564329435s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:31:04.036294   77627 kubeadm.go:597] duration metric: took 4m18.357564209s to restartPrimaryControlPlane
	W0729 18:31:04.036359   77627 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 18:31:04.036388   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:31:03.512105   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:03.526536   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:03.526602   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:03.561579   78080 cri.go:89] found id: ""
	I0729 18:31:03.561604   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.561614   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:03.561621   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:03.561681   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:03.603995   78080 cri.go:89] found id: ""
	I0729 18:31:03.604019   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.604028   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:03.604033   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:03.604079   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:03.640879   78080 cri.go:89] found id: ""
	I0729 18:31:03.640902   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.640910   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:03.640917   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:03.640971   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:03.675262   78080 cri.go:89] found id: ""
	I0729 18:31:03.675288   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.675296   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:03.675302   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:03.675349   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:03.708094   78080 cri.go:89] found id: ""
	I0729 18:31:03.708128   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.708137   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:03.708142   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:03.708190   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:03.748262   78080 cri.go:89] found id: ""
	I0729 18:31:03.748287   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.748298   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:03.748304   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:03.748360   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:03.789758   78080 cri.go:89] found id: ""
	I0729 18:31:03.789788   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.789800   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:03.789806   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:03.789893   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:03.829253   78080 cri.go:89] found id: ""
	I0729 18:31:03.829280   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.829291   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:03.829299   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:03.829317   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:03.883012   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:03.883044   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:03.899264   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:03.899294   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:03.970241   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:03.970261   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:03.970274   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:04.056205   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:04.056244   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:06.604919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:06.619163   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:06.619242   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:06.656939   78080 cri.go:89] found id: ""
	I0729 18:31:06.656970   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.656982   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:06.656989   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:06.657075   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:06.692577   78080 cri.go:89] found id: ""
	I0729 18:31:06.692608   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.692624   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:06.692632   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:06.692695   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:06.730045   78080 cri.go:89] found id: ""
	I0729 18:31:06.730077   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.730088   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:06.730096   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:06.730179   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:06.771794   78080 cri.go:89] found id: ""
	I0729 18:31:06.771820   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.771830   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:06.771838   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:06.771905   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:06.806149   78080 cri.go:89] found id: ""
	I0729 18:31:06.806177   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.806187   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:06.806194   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:06.806252   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:06.851875   78080 cri.go:89] found id: ""
	I0729 18:31:06.851905   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.851923   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:06.851931   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:06.851996   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:06.890335   78080 cri.go:89] found id: ""
	I0729 18:31:06.890382   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.890393   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:06.890399   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:06.890460   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:06.928873   78080 cri.go:89] found id: ""
	I0729 18:31:06.928902   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.928912   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:06.928922   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:06.928935   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:06.944269   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:06.944295   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:07.011658   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:07.011682   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:07.011697   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:07.109899   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:07.109948   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:07.154569   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:07.154600   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:02.935054   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:05.434752   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:06.242994   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:08.738448   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:09.709101   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:09.722387   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:09.722461   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:09.760443   78080 cri.go:89] found id: ""
	I0729 18:31:09.760471   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.760481   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:09.760488   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:09.760551   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:09.796177   78080 cri.go:89] found id: ""
	I0729 18:31:09.796200   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.796209   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:09.796214   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:09.796264   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:09.831955   78080 cri.go:89] found id: ""
	I0729 18:31:09.831983   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.831990   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:09.831995   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:09.832055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:09.863913   78080 cri.go:89] found id: ""
	I0729 18:31:09.863939   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.863949   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:09.863956   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:09.864014   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:09.897553   78080 cri.go:89] found id: ""
	I0729 18:31:09.897575   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.897583   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:09.897588   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:09.897645   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:09.935203   78080 cri.go:89] found id: ""
	I0729 18:31:09.935221   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.935228   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:09.935238   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:09.935296   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:09.971098   78080 cri.go:89] found id: ""
	I0729 18:31:09.971125   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.971135   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:09.971142   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:09.971224   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:10.006760   78080 cri.go:89] found id: ""
	I0729 18:31:10.006794   78080 logs.go:276] 0 containers: []
	W0729 18:31:10.006804   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:10.006815   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:10.006830   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:10.056037   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:10.056066   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:10.070633   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:10.070660   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:10.139953   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:10.139983   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:10.140002   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:10.220748   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:10.220781   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:07.436020   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:09.934218   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:11.934977   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:10.740109   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:13.239440   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:12.766391   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:12.779837   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:12.779889   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:12.813910   78080 cri.go:89] found id: ""
	I0729 18:31:12.813941   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.813951   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:12.813959   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:12.814008   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:12.848811   78080 cri.go:89] found id: ""
	I0729 18:31:12.848854   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.848865   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:12.848872   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:12.848927   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:12.884740   78080 cri.go:89] found id: ""
	I0729 18:31:12.884769   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.884780   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:12.884786   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:12.884833   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:12.923826   78080 cri.go:89] found id: ""
	I0729 18:31:12.923859   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.923870   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:12.923878   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:12.923930   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:12.959127   78080 cri.go:89] found id: ""
	I0729 18:31:12.959157   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.959168   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:12.959175   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:12.959245   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:12.994384   78080 cri.go:89] found id: ""
	I0729 18:31:12.994417   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.994430   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:12.994439   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:12.994506   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:13.027854   78080 cri.go:89] found id: ""
	I0729 18:31:13.027883   78080 logs.go:276] 0 containers: []
	W0729 18:31:13.027892   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:13.027897   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:13.027951   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:13.062270   78080 cri.go:89] found id: ""
	I0729 18:31:13.062300   78080 logs.go:276] 0 containers: []
	W0729 18:31:13.062310   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:13.062321   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:13.062334   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:13.114473   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:13.114500   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:13.127820   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:13.127845   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:13.195830   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:13.195848   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:13.195862   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:13.281711   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:13.281748   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:15.824456   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:15.837532   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:15.837587   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:15.871706   78080 cri.go:89] found id: ""
	I0729 18:31:15.871739   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.871750   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:15.871757   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:15.871817   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:15.906882   78080 cri.go:89] found id: ""
	I0729 18:31:15.906905   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.906912   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:15.906917   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:15.906976   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:15.943015   78080 cri.go:89] found id: ""
	I0729 18:31:15.943043   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.943057   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:15.943065   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:15.943126   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:15.980501   78080 cri.go:89] found id: ""
	I0729 18:31:15.980528   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.980536   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:15.980542   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:15.980588   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:16.014148   78080 cri.go:89] found id: ""
	I0729 18:31:16.014176   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.014183   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:16.014189   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:16.014236   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:16.048296   78080 cri.go:89] found id: ""
	I0729 18:31:16.048319   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.048326   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:16.048334   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:16.048392   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:16.084328   78080 cri.go:89] found id: ""
	I0729 18:31:16.084350   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.084358   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:16.084363   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:16.084411   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:16.120048   78080 cri.go:89] found id: ""
	I0729 18:31:16.120076   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.120084   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:16.120092   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:16.120105   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:16.173476   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:16.173503   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:16.190200   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:16.190232   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:16.261993   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:16.262014   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:16.262026   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:16.340298   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:16.340331   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
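The block above is one full probe cycle from minikube's log collector: for each expected control-plane component it runs sudo crictl ps -a --quiet --name=<component>, finds no containers in any state, and then falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output. Below is a minimal local sketch of that probe loop; it is illustrative only, the component names and crictl flags are copied from the log, and it runs crictl directly instead of through minikube's ssh_runner.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Component names as probed in the log above.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// --quiet prints only container IDs; an empty result corresponds to
		// the `0 containers: []` / `No container was found matching` lines.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("probe %q failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}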
	I0729 18:31:14.434706   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:16.936150   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:15.739493   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:18.239834   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:18.883152   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:18.897292   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:18.897360   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:18.931276   78080 cri.go:89] found id: ""
	I0729 18:31:18.931303   78080 logs.go:276] 0 containers: []
	W0729 18:31:18.931313   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:18.931321   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:18.931379   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:18.975803   78080 cri.go:89] found id: ""
	I0729 18:31:18.975832   78080 logs.go:276] 0 containers: []
	W0729 18:31:18.975843   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:18.975853   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:18.975912   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:19.012920   78080 cri.go:89] found id: ""
	I0729 18:31:19.012951   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.012963   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:19.012970   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:19.013031   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:19.047640   78080 cri.go:89] found id: ""
	I0729 18:31:19.047667   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.047679   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:19.047687   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:19.047749   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:19.082495   78080 cri.go:89] found id: ""
	I0729 18:31:19.082522   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.082533   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:19.082540   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:19.082591   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:19.117988   78080 cri.go:89] found id: ""
	I0729 18:31:19.118016   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.118027   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:19.118034   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:19.118096   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:19.153725   78080 cri.go:89] found id: ""
	I0729 18:31:19.153753   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.153764   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:19.153771   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:19.153836   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:19.192827   78080 cri.go:89] found id: ""
	I0729 18:31:19.192857   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.192868   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:19.192879   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:19.192894   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:19.208802   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:19.208833   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:19.285877   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:19.285897   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:19.285909   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:19.366563   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:19.366598   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:19.404563   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:19.404590   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:21.958449   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:21.971674   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:21.971739   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:22.006231   78080 cri.go:89] found id: ""
	I0729 18:31:22.006253   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.006261   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:22.006266   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:22.006314   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:22.042575   78080 cri.go:89] found id: ""
	I0729 18:31:22.042599   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.042609   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:22.042616   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:22.042679   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:22.079446   78080 cri.go:89] found id: ""
	I0729 18:31:22.079471   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.079482   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:22.079489   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:22.079554   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:22.115940   78080 cri.go:89] found id: ""
	I0729 18:31:22.115967   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.115976   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:22.115984   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:22.116055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:22.149420   78080 cri.go:89] found id: ""
	I0729 18:31:22.149447   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.149456   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:22.149461   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:22.149511   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:22.182992   78080 cri.go:89] found id: ""
	I0729 18:31:22.183019   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.183027   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:22.183032   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:22.183090   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:22.218441   78080 cri.go:89] found id: ""
	I0729 18:31:22.218474   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.218487   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:22.218497   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:22.218564   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:19.434020   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:21.434806   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:20.739308   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:22.741502   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:22.263135   78080 cri.go:89] found id: ""
	I0729 18:31:22.263164   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.263173   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:22.263183   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:22.263198   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:22.319010   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:22.319049   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:22.333151   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:22.333179   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:22.404661   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:22.404683   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:22.404706   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:22.488497   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:22.488537   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:25.032215   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:25.045114   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:25.045191   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:25.082244   78080 cri.go:89] found id: ""
	I0729 18:31:25.082278   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.082289   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:25.082299   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:25.082388   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:25.118295   78080 cri.go:89] found id: ""
	I0729 18:31:25.118318   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.118325   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:25.118331   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:25.118395   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:25.157948   78080 cri.go:89] found id: ""
	I0729 18:31:25.157974   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.157984   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:25.157992   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:25.158054   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:25.194708   78080 cri.go:89] found id: ""
	I0729 18:31:25.194734   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.194743   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:25.194751   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:25.194813   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:25.235923   78080 cri.go:89] found id: ""
	I0729 18:31:25.235952   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.235962   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:25.235969   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:25.236032   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:25.271316   78080 cri.go:89] found id: ""
	I0729 18:31:25.271342   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.271353   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:25.271360   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:25.271422   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:25.309399   78080 cri.go:89] found id: ""
	I0729 18:31:25.309427   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.309438   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:25.309446   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:25.309503   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:25.347979   78080 cri.go:89] found id: ""
	I0729 18:31:25.348009   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.348021   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:25.348031   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:25.348046   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:25.400785   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:25.400812   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:25.413891   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:25.413915   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:25.487721   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:25.487752   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:25.487767   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:25.575500   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:25.575531   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:23.935200   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:26.434289   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:25.240961   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:27.738838   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:27.738866   77859 pod_ready.go:81] duration metric: took 4m0.005785253s for pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace to be "Ready" ...
	E0729 18:31:27.738877   77859 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0729 18:31:27.738887   77859 pod_ready.go:38] duration metric: took 4m4.550102816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
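The readiness wait that ends above polls the pod's Ready condition until its 4m0s budget runs out, which is what produces the context-deadline-exceeded line. A minimal sketch of such a poll using kubectl's jsonpath output follows; the pod and namespace names are copied from the log, kubectl is assumed to be on PATH and pointed at the same cluster, and this is not the pod_ready.go implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Give up after the same 4-minute budget seen in the log.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		// Extract the status of the Ready condition ("True"/"False").
		out, err := exec.Command(
			"kubectl", "-n", "kube-system", "get", "pod", "metrics-server-569cc877fc-bm8tm",
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`,
		).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("waitPodCondition: context deadline exceeded")
}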
	I0729 18:31:27.738903   77859 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:31:27.738934   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:27.738991   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:27.798686   77859 cri.go:89] found id: "630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:27.798710   77859 cri.go:89] found id: ""
	I0729 18:31:27.798717   77859 logs.go:276] 1 containers: [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4]
	I0729 18:31:27.798774   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.804769   77859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:27.804827   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:27.849829   77859 cri.go:89] found id: "fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:27.849849   77859 cri.go:89] found id: ""
	I0729 18:31:27.849857   77859 logs.go:276] 1 containers: [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a]
	I0729 18:31:27.849909   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.854472   77859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:27.854540   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:27.891637   77859 cri.go:89] found id: "2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:27.891659   77859 cri.go:89] found id: ""
	I0729 18:31:27.891668   77859 logs.go:276] 1 containers: [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b]
	I0729 18:31:27.891715   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.896663   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:27.896713   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:27.941948   77859 cri.go:89] found id: "991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:27.941968   77859 cri.go:89] found id: ""
	I0729 18:31:27.941976   77859 logs.go:276] 1 containers: [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd]
	I0729 18:31:27.942018   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.946770   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:27.946821   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:27.988118   77859 cri.go:89] found id: "ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:27.988139   77859 cri.go:89] found id: ""
	I0729 18:31:27.988147   77859 logs.go:276] 1 containers: [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9]
	I0729 18:31:27.988193   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.992474   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:27.992535   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:28.032779   77859 cri.go:89] found id: "92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:28.032801   77859 cri.go:89] found id: ""
	I0729 18:31:28.032811   77859 logs.go:276] 1 containers: [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc]
	I0729 18:31:28.032859   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:28.037791   77859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:28.037838   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:28.081087   77859 cri.go:89] found id: ""
	I0729 18:31:28.081115   77859 logs.go:276] 0 containers: []
	W0729 18:31:28.081124   77859 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:28.081131   77859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 18:31:28.081183   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 18:31:28.123906   77859 cri.go:89] found id: "9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:28.123927   77859 cri.go:89] found id: "482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:28.123933   77859 cri.go:89] found id: ""
	I0729 18:31:28.123940   77859 logs.go:276] 2 containers: [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b]
	I0729 18:31:28.123979   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:28.128737   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:28.133127   77859 logs.go:123] Gathering logs for storage-provisioner [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481] ...
	I0729 18:31:28.133201   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:28.182950   77859 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:28.182985   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:28.241873   77859 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:28.241914   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 18:31:28.391355   77859 logs.go:123] Gathering logs for kube-apiserver [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4] ...
	I0729 18:31:28.391389   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:28.447637   77859 logs.go:123] Gathering logs for etcd [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a] ...
	I0729 18:31:28.447671   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:28.496815   77859 logs.go:123] Gathering logs for kube-scheduler [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd] ...
	I0729 18:31:28.496848   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:28.540617   77859 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:28.540651   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:29.063074   77859 logs.go:123] Gathering logs for container status ...
	I0729 18:31:29.063116   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:29.123348   77859 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:29.123378   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:29.137340   77859 logs.go:123] Gathering logs for coredns [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b] ...
	I0729 18:31:29.137365   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:29.174775   77859 logs.go:123] Gathering logs for kube-proxy [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9] ...
	I0729 18:31:29.174810   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:29.227526   77859 logs.go:123] Gathering logs for kube-controller-manager [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc] ...
	I0729 18:31:29.227560   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:29.281814   77859 logs.go:123] Gathering logs for storage-provisioner [482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b] ...
	I0729 18:31:29.281844   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:28.121761   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:28.136756   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:28.136813   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:28.175461   78080 cri.go:89] found id: ""
	I0729 18:31:28.175491   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.175502   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:28.175509   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:28.175567   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:28.215024   78080 cri.go:89] found id: ""
	I0729 18:31:28.215046   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.215055   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:28.215060   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:28.215122   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:28.253999   78080 cri.go:89] found id: ""
	I0729 18:31:28.254023   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.254031   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:28.254037   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:28.254090   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:28.287902   78080 cri.go:89] found id: ""
	I0729 18:31:28.287929   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.287940   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:28.287948   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:28.288006   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:28.322390   78080 cri.go:89] found id: ""
	I0729 18:31:28.322422   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.322433   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:28.322441   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:28.322500   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:28.356951   78080 cri.go:89] found id: ""
	I0729 18:31:28.356980   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.356991   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:28.356999   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:28.357060   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:28.393439   78080 cri.go:89] found id: ""
	I0729 18:31:28.393461   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.393471   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:28.393477   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:28.393535   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:28.431827   78080 cri.go:89] found id: ""
	I0729 18:31:28.431858   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.431868   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:28.431878   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:28.431892   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:28.509279   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:28.509315   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:28.564036   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:28.564064   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:28.626970   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:28.627000   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:28.641417   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:28.641446   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:28.713406   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:31.213942   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:31.228942   78080 kubeadm.go:597] duration metric: took 4m3.040952507s to restartPrimaryControlPlane
	W0729 18:31:31.229020   78080 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 18:31:31.229042   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:31:31.696335   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:31:31.711230   78080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:31:31.720924   78080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:31:31.730348   78080 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:31:31.730378   78080 kubeadm.go:157] found existing configuration files:
	
	I0729 18:31:31.730418   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:31:31.739761   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:31:31.739810   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:31:31.749021   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:31:31.758107   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:31:31.758155   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:31:31.768326   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:31:31.777347   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:31:31.777388   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:31:31.786752   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:31:31.795728   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:31:31.795776   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
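The sequence above is the stale-config check that precedes the kubeadm init run below: ls -la on the four kubeconfig files exits non-zero (none exist after the reset), so each file is grepped for https://control-plane.minikube.internal:8443 and removed when the grep fails. A minimal sketch of that check-and-remove step, assuming direct file access instead of the ssh_runner calls shown in the log (paths and the endpoint string are copied from the log; this is not minikube's code and needs root to touch /etc/kubernetes):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			// File exists and already points at the expected endpoint: keep it.
			fmt.Printf("kept %s\n", f)
			continue
		}
		// Missing (as in this run) or pointing elsewhere: remove it so the
		// subsequent `kubeadm init` regenerates a fresh copy.
		if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
			fmt.Printf("could not remove %s: %v\n", f, rmErr)
			continue
		}
		fmt.Printf("removed stale or missing %s\n", f)
	}
}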
	I0729 18:31:31.805369   78080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:31:31.883678   78080 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 18:31:31.883751   78080 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:31:32.040989   78080 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:31:32.041127   78080 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:31:32.041259   78080 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:31:32.261525   78080 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:31:28.434784   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:30.435227   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:32.263137   78080 out.go:204]   - Generating certificates and keys ...
	I0729 18:31:32.263242   78080 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:31:32.263349   78080 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:31:32.263461   78080 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:31:32.263554   78080 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:31:32.263640   78080 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:31:32.263724   78080 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:31:32.263801   78080 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:31:32.263872   78080 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:31:32.263993   78080 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:31:32.264109   78080 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:31:32.264164   78080 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:31:32.264255   78080 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:31:32.435248   78080 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:31:32.509478   78080 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:31:32.737003   78080 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:31:33.079523   78080 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:31:33.099871   78080 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:31:33.101450   78080 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:31:33.101520   78080 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:31:33.242577   78080 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:31:31.826678   77859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:31.845448   77859 api_server.go:72] duration metric: took 4m16.365262679s to wait for apiserver process to appear ...
	I0729 18:31:31.845478   77859 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:31:31.845519   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:31.845568   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:31.889194   77859 cri.go:89] found id: "630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:31.889226   77859 cri.go:89] found id: ""
	I0729 18:31:31.889236   77859 logs.go:276] 1 containers: [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4]
	I0729 18:31:31.889290   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:31.894167   77859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:31.894271   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:31.936287   77859 cri.go:89] found id: "fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:31.936306   77859 cri.go:89] found id: ""
	I0729 18:31:31.936315   77859 logs.go:276] 1 containers: [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a]
	I0729 18:31:31.936367   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:31.941051   77859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:31.941110   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:31.978033   77859 cri.go:89] found id: "2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:31.978057   77859 cri.go:89] found id: ""
	I0729 18:31:31.978066   77859 logs.go:276] 1 containers: [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b]
	I0729 18:31:31.978115   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:31.982632   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:31.982704   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:32.023792   77859 cri.go:89] found id: "991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:32.023812   77859 cri.go:89] found id: ""
	I0729 18:31:32.023820   77859 logs.go:276] 1 containers: [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd]
	I0729 18:31:32.023875   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.028309   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:32.028367   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:32.071944   77859 cri.go:89] found id: "ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:32.071966   77859 cri.go:89] found id: ""
	I0729 18:31:32.071975   77859 logs.go:276] 1 containers: [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9]
	I0729 18:31:32.072033   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.076171   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:32.076252   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:32.111357   77859 cri.go:89] found id: "92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:32.111379   77859 cri.go:89] found id: ""
	I0729 18:31:32.111389   77859 logs.go:276] 1 containers: [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc]
	I0729 18:31:32.111446   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.115718   77859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:32.115775   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:32.168552   77859 cri.go:89] found id: ""
	I0729 18:31:32.168586   77859 logs.go:276] 0 containers: []
	W0729 18:31:32.168597   77859 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:32.168604   77859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 18:31:32.168686   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 18:31:32.210002   77859 cri.go:89] found id: "9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:32.210027   77859 cri.go:89] found id: "482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:32.210034   77859 cri.go:89] found id: ""
	I0729 18:31:32.210043   77859 logs.go:276] 2 containers: [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b]
	I0729 18:31:32.210090   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.214929   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.220097   77859 logs.go:123] Gathering logs for container status ...
	I0729 18:31:32.220121   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:32.270343   77859 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:32.270384   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:32.329269   77859 logs.go:123] Gathering logs for kube-apiserver [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4] ...
	I0729 18:31:32.329303   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:32.388361   77859 logs.go:123] Gathering logs for storage-provisioner [482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b] ...
	I0729 18:31:32.388388   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:32.430072   77859 logs.go:123] Gathering logs for coredns [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b] ...
	I0729 18:31:32.430108   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:32.471669   77859 logs.go:123] Gathering logs for kube-scheduler [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd] ...
	I0729 18:31:32.471701   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:32.508395   77859 logs.go:123] Gathering logs for kube-proxy [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9] ...
	I0729 18:31:32.508424   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:32.548968   77859 logs.go:123] Gathering logs for kube-controller-manager [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc] ...
	I0729 18:31:32.549001   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:32.605269   77859 logs.go:123] Gathering logs for storage-provisioner [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481] ...
	I0729 18:31:32.605306   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:32.642298   77859 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:32.642330   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:32.659407   77859 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:32.659431   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 18:31:32.776509   77859 logs.go:123] Gathering logs for etcd [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a] ...
	I0729 18:31:32.776544   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:32.832365   77859 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:32.832395   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:35.748109   77627 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.711694865s)
	I0729 18:31:35.748184   77627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:31:35.765137   77627 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:31:35.775945   77627 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:31:35.786206   77627 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:31:35.786232   77627 kubeadm.go:157] found existing configuration files:
	
	I0729 18:31:35.786284   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:31:35.797157   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:31:35.797218   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:31:35.810497   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:31:35.821537   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:31:35.821603   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:31:35.832985   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:31:35.842247   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:31:35.842309   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:31:35.852578   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:31:35.861798   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:31:35.861858   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:31:35.872903   77627 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:31:35.926675   77627 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 18:31:35.926872   77627 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:31:36.089002   77627 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:31:36.089179   77627 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:31:36.089310   77627 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:31:36.321844   77627 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:31:33.244436   78080 out.go:204]   - Booting up control plane ...
	I0729 18:31:33.244570   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:31:33.245677   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:31:33.249530   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:31:33.250262   78080 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:31:33.261418   78080 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 18:31:36.324255   77627 out.go:204]   - Generating certificates and keys ...
	I0729 18:31:36.324352   77627 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:31:36.324435   77627 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:31:36.324539   77627 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:31:36.324619   77627 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:31:36.324707   77627 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:31:36.324780   77627 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:31:36.324864   77627 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:31:36.324945   77627 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:31:36.325036   77627 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:31:36.325175   77627 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:31:36.325340   77627 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:31:36.325425   77627 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:31:36.815491   77627 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:31:36.870914   77627 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 18:31:36.957705   77627 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:31:37.074845   77627 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:31:37.220920   77627 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:31:37.221651   77627 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:31:37.224384   77627 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:31:32.435653   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:34.933615   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:36.935070   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:35.792366   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:31:35.801160   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 200:
	ok
	I0729 18:31:35.804043   77859 api_server.go:141] control plane version: v1.30.3
	I0729 18:31:35.804063   77859 api_server.go:131] duration metric: took 3.958578435s to wait for apiserver health ...
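	(For reference, the healthz probe logged above can be reproduced by hand against the same endpoint. This is a minimal sketch, assuming the default RBAC bindings that permit anonymous reads of /healthz; it is not part of the test harness output.)

	    # expected to print "ok" on a healthy apiserver
	    curl -sk https://192.168.61.244:8444/healthz
	    # or via kubectl, using the kubeconfig the test already references
	    kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz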
	I0729 18:31:35.804072   77859 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:31:35.804099   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:35.804140   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:35.845977   77859 cri.go:89] found id: "630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:35.846003   77859 cri.go:89] found id: ""
	I0729 18:31:35.846018   77859 logs.go:276] 1 containers: [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4]
	I0729 18:31:35.846072   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:35.851227   77859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:35.851302   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:35.892117   77859 cri.go:89] found id: "fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:35.892142   77859 cri.go:89] found id: ""
	I0729 18:31:35.892158   77859 logs.go:276] 1 containers: [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a]
	I0729 18:31:35.892215   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:35.897136   77859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:35.897216   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:35.941512   77859 cri.go:89] found id: "2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:35.941532   77859 cri.go:89] found id: ""
	I0729 18:31:35.941541   77859 logs.go:276] 1 containers: [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b]
	I0729 18:31:35.941598   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:35.946072   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:35.946124   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:35.984306   77859 cri.go:89] found id: "991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:35.984327   77859 cri.go:89] found id: ""
	I0729 18:31:35.984335   77859 logs.go:276] 1 containers: [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd]
	I0729 18:31:35.984381   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:35.988605   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:35.988671   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:36.031476   77859 cri.go:89] found id: "ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:36.031504   77859 cri.go:89] found id: ""
	I0729 18:31:36.031514   77859 logs.go:276] 1 containers: [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9]
	I0729 18:31:36.031567   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:36.037262   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:36.037319   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:36.078054   77859 cri.go:89] found id: "92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:36.078076   77859 cri.go:89] found id: ""
	I0729 18:31:36.078084   77859 logs.go:276] 1 containers: [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc]
	I0729 18:31:36.078134   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:36.082628   77859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:36.082693   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:36.122768   77859 cri.go:89] found id: ""
	I0729 18:31:36.122791   77859 logs.go:276] 0 containers: []
	W0729 18:31:36.122799   77859 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:36.122804   77859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 18:31:36.122849   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 18:31:36.166611   77859 cri.go:89] found id: "9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:36.166636   77859 cri.go:89] found id: "482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:36.166642   77859 cri.go:89] found id: ""
	I0729 18:31:36.166650   77859 logs.go:276] 2 containers: [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b]
	I0729 18:31:36.166712   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:36.171240   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:36.175336   77859 logs.go:123] Gathering logs for kube-controller-manager [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc] ...
	I0729 18:31:36.175354   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:36.233224   77859 logs.go:123] Gathering logs for storage-provisioner [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481] ...
	I0729 18:31:36.233255   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:36.282788   77859 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:36.282820   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:36.675615   77859 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:36.675660   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:36.731559   77859 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:36.731602   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:36.747814   77859 logs.go:123] Gathering logs for kube-scheduler [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd] ...
	I0729 18:31:36.747845   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:36.786940   77859 logs.go:123] Gathering logs for kube-proxy [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9] ...
	I0729 18:31:36.787036   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:36.829659   77859 logs.go:123] Gathering logs for storage-provisioner [482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b] ...
	I0729 18:31:36.829694   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:36.865907   77859 logs.go:123] Gathering logs for container status ...
	I0729 18:31:36.865939   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:36.908399   77859 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:36.908427   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 18:31:37.012220   77859 logs.go:123] Gathering logs for kube-apiserver [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4] ...
	I0729 18:31:37.012255   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:37.063429   77859 logs.go:123] Gathering logs for etcd [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a] ...
	I0729 18:31:37.063463   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:37.107615   77859 logs.go:123] Gathering logs for coredns [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b] ...
	I0729 18:31:37.107654   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:39.655973   77859 system_pods.go:59] 8 kube-system pods found
	I0729 18:31:39.656011   77859 system_pods.go:61] "coredns-7db6d8ff4d-mk6mx" [e005b1f9-cc7a-45aa-915e-85a461ebc814] Running
	I0729 18:31:39.656019   77859 system_pods.go:61] "etcd-default-k8s-diff-port-502055" [72b552cc-67b0-46bf-b3dd-b6732ebe8493] Running
	I0729 18:31:39.656025   77859 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-502055" [0dc22dbc-667e-4d6f-9938-b13bf3503f79] Running
	I0729 18:31:39.656032   77859 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-502055" [4df00b98-12cf-4359-9d98-8cce6ee9708a] Running
	I0729 18:31:39.656037   77859 system_pods.go:61] "kube-proxy-cgdm8" [57a99bb3-9e63-47dd-a958-5be7f3c0a9c0] Running
	I0729 18:31:39.656043   77859 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-502055" [247b7cd1-6267-469d-af05-b33b284ae846] Running
	I0729 18:31:39.656051   77859 system_pods.go:61] "metrics-server-569cc877fc-bm8tm" [6891d9ee-82db-4307-adf1-ff60d35506bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:31:39.656057   77859 system_pods.go:61] "storage-provisioner" [c2264d30-60dc-41f9-9b84-3b073031cf1b] Running
	I0729 18:31:39.656068   77859 system_pods.go:74] duration metric: took 3.851988452s to wait for pod list to return data ...
	I0729 18:31:39.656081   77859 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:31:39.658999   77859 default_sa.go:45] found service account: "default"
	I0729 18:31:39.659024   77859 default_sa.go:55] duration metric: took 2.935237ms for default service account to be created ...
	I0729 18:31:39.659034   77859 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 18:31:39.664926   77859 system_pods.go:86] 8 kube-system pods found
	I0729 18:31:39.664952   77859 system_pods.go:89] "coredns-7db6d8ff4d-mk6mx" [e005b1f9-cc7a-45aa-915e-85a461ebc814] Running
	I0729 18:31:39.664959   77859 system_pods.go:89] "etcd-default-k8s-diff-port-502055" [72b552cc-67b0-46bf-b3dd-b6732ebe8493] Running
	I0729 18:31:39.664966   77859 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-502055" [0dc22dbc-667e-4d6f-9938-b13bf3503f79] Running
	I0729 18:31:39.664973   77859 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-502055" [4df00b98-12cf-4359-9d98-8cce6ee9708a] Running
	I0729 18:31:39.664979   77859 system_pods.go:89] "kube-proxy-cgdm8" [57a99bb3-9e63-47dd-a958-5be7f3c0a9c0] Running
	I0729 18:31:39.664987   77859 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-502055" [247b7cd1-6267-469d-af05-b33b284ae846] Running
	I0729 18:31:39.665003   77859 system_pods.go:89] "metrics-server-569cc877fc-bm8tm" [6891d9ee-82db-4307-adf1-ff60d35506bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:31:39.665013   77859 system_pods.go:89] "storage-provisioner" [c2264d30-60dc-41f9-9b84-3b073031cf1b] Running
	I0729 18:31:39.665025   77859 system_pods.go:126] duration metric: took 5.974722ms to wait for k8s-apps to be running ...
	I0729 18:31:39.665036   77859 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 18:31:39.665093   77859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:31:39.685280   77859 system_svc.go:56] duration metric: took 20.237099ms WaitForService to wait for kubelet
	I0729 18:31:39.685311   77859 kubeadm.go:582] duration metric: took 4m24.205126513s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:31:39.685336   77859 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:31:39.688419   77859 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:31:39.688441   77859 node_conditions.go:123] node cpu capacity is 2
	I0729 18:31:39.688455   77859 node_conditions.go:105] duration metric: took 3.111768ms to run NodePressure ...
	I0729 18:31:39.688470   77859 start.go:241] waiting for startup goroutines ...
	I0729 18:31:39.688483   77859 start.go:246] waiting for cluster config update ...
	I0729 18:31:39.688497   77859 start.go:255] writing updated cluster config ...
	I0729 18:31:39.688830   77859 ssh_runner.go:195] Run: rm -f paused
	I0729 18:31:39.739685   77859 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 18:31:39.741763   77859 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-502055" cluster and "default" namespace by default
	I0729 18:31:37.226046   77627 out.go:204]   - Booting up control plane ...
	I0729 18:31:37.226163   77627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:31:37.227852   77627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:31:37.228710   77627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:31:37.248177   77627 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:31:37.248863   77627 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:31:37.248915   77627 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:31:37.376905   77627 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 18:31:37.377030   77627 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 18:31:37.878928   77627 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.066447ms
	I0729 18:31:37.879057   77627 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 18:31:38.935622   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:41.433736   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:42.880479   77627 kubeadm.go:310] [api-check] The API server is healthy after 5.001345894s
	I0729 18:31:42.892513   77627 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 18:31:42.910175   77627 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 18:31:42.948111   77627 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 18:31:42.948340   77627 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-409322 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 18:31:42.966823   77627 kubeadm.go:310] [bootstrap-token] Using token: f8a98i.3r2is78gllm02lfe
	I0729 18:31:42.968170   77627 out.go:204]   - Configuring RBAC rules ...
	I0729 18:31:42.968304   77627 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 18:31:42.978257   77627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 18:31:42.986458   77627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 18:31:42.989744   77627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 18:31:42.992484   77627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 18:31:42.995162   77627 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 18:31:43.287739   77627 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 18:31:43.726370   77627 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 18:31:44.290225   77627 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 18:31:44.291166   77627 kubeadm.go:310] 
	I0729 18:31:44.291267   77627 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 18:31:44.291278   77627 kubeadm.go:310] 
	I0729 18:31:44.291392   77627 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 18:31:44.291401   77627 kubeadm.go:310] 
	I0729 18:31:44.291436   77627 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 18:31:44.291530   77627 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 18:31:44.291589   77627 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 18:31:44.291606   77627 kubeadm.go:310] 
	I0729 18:31:44.291701   77627 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 18:31:44.291713   77627 kubeadm.go:310] 
	I0729 18:31:44.291788   77627 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 18:31:44.291797   77627 kubeadm.go:310] 
	I0729 18:31:44.291860   77627 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 18:31:44.291954   77627 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 18:31:44.292052   77627 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 18:31:44.292070   77627 kubeadm.go:310] 
	I0729 18:31:44.292167   77627 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 18:31:44.292269   77627 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 18:31:44.292280   77627 kubeadm.go:310] 
	I0729 18:31:44.292402   77627 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token f8a98i.3r2is78gllm02lfe \
	I0729 18:31:44.292543   77627 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 \
	I0729 18:31:44.292585   77627 kubeadm.go:310] 	--control-plane 
	I0729 18:31:44.292595   77627 kubeadm.go:310] 
	I0729 18:31:44.292710   77627 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 18:31:44.292732   77627 kubeadm.go:310] 
	I0729 18:31:44.292836   77627 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token f8a98i.3r2is78gllm02lfe \
	I0729 18:31:44.293015   77627 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 
	I0729 18:31:44.293440   77627 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:31:44.293500   77627 cni.go:84] Creating CNI manager for ""
	I0729 18:31:44.293512   77627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:31:44.295432   77627 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:31:44.296845   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:31:44.308178   77627 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
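	(The 496-byte conflist copied here configures the bridge CNI chosen two lines above. The log does not reproduce its contents; the following is an illustrative sketch of a typical bridge conflist of this kind, with representative field values rather than a copy of minikube's template.)

	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }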
	I0729 18:31:44.334403   77627 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 18:31:44.334542   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:44.334562   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-409322 minikube.k8s.io/updated_at=2024_07_29T18_31_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899 minikube.k8s.io/name=embed-certs-409322 minikube.k8s.io/primary=true
	I0729 18:31:44.366345   77627 ops.go:34] apiserver oom_adj: -16
	I0729 18:31:44.537970   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:43.433884   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:45.434714   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:45.039020   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:45.538831   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:46.038700   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:46.538761   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:47.038725   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:47.538100   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:48.038309   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:48.538896   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:49.039011   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:49.538333   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:47.435067   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:49.934658   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:50.038548   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:50.538590   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:51.038131   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:51.538253   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:52.038599   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:52.538827   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:53.038077   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:53.538860   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:54.038530   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:54.538952   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:52.433783   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:54.434442   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:56.434864   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:55.038263   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:55.538050   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:56.038006   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:56.538079   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:57.038042   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:57.538146   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:57.696274   77627 kubeadm.go:1113] duration metric: took 13.36179604s to wait for elevateKubeSystemPrivileges
	I0729 18:31:57.696308   77627 kubeadm.go:394] duration metric: took 5m12.066483926s to StartCluster
	I0729 18:31:57.696324   77627 settings.go:142] acquiring lock: {Name:mkd2c4591636cc1d19b23a0dab1807db2e7ea395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:31:57.696406   77627 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:31:57.698195   77627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:31:57.698479   77627 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:31:57.698592   77627 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 18:31:57.698674   77627 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-409322"
	I0729 18:31:57.698688   77627 addons.go:69] Setting metrics-server=true in profile "embed-certs-409322"
	I0729 18:31:57.698695   77627 addons.go:69] Setting default-storageclass=true in profile "embed-certs-409322"
	I0729 18:31:57.698714   77627 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-409322"
	I0729 18:31:57.698719   77627 addons.go:234] Setting addon metrics-server=true in "embed-certs-409322"
	W0729 18:31:57.698723   77627 addons.go:243] addon storage-provisioner should already be in state true
	W0729 18:31:57.698729   77627 addons.go:243] addon metrics-server should already be in state true
	I0729 18:31:57.698733   77627 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-409322"
	I0729 18:31:57.698755   77627 host.go:66] Checking if "embed-certs-409322" exists ...
	I0729 18:31:57.698676   77627 config.go:182] Loaded profile config "embed-certs-409322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:31:57.698760   77627 host.go:66] Checking if "embed-certs-409322" exists ...
	I0729 18:31:57.699157   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.699169   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.699207   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.699170   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.699229   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.699209   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.700201   77627 out.go:177] * Verifying Kubernetes components...
	I0729 18:31:57.701577   77627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:31:57.715130   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44873
	I0729 18:31:57.715156   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34459
	I0729 18:31:57.715708   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.715759   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.716320   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.716329   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.716344   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.716345   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.716666   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.716672   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.716868   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:31:57.717251   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.717283   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.717715   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41041
	I0729 18:31:57.718172   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.718684   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.718709   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.719111   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.719630   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.719670   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.720815   77627 addons.go:234] Setting addon default-storageclass=true in "embed-certs-409322"
	W0729 18:31:57.720839   77627 addons.go:243] addon default-storageclass should already be in state true
	I0729 18:31:57.720870   77627 host.go:66] Checking if "embed-certs-409322" exists ...
	I0729 18:31:57.721233   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.721264   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.733757   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0729 18:31:57.734325   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.735372   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.735397   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.735736   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.735928   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:31:57.735939   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35853
	I0729 18:31:57.736244   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.736923   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.736942   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.737318   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.737664   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:31:57.739761   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:31:57.740354   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:31:57.741103   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43867
	I0729 18:31:57.741489   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.741979   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.741999   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.742296   77627 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 18:31:57.742348   77627 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:31:57.742400   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.743411   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.743443   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.743498   77627 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 18:31:57.743515   77627 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 18:31:57.743537   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:31:57.743682   77627 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:31:57.743697   77627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 18:31:57.743711   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:31:57.748331   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.748743   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:31:57.748759   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.748941   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:31:57.748986   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.749110   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:31:57.749290   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:31:57.749423   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:31:57.749638   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:31:57.749650   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.749671   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:31:57.749834   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:31:57.749940   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:31:57.750051   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:31:57.760794   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33699
	I0729 18:31:57.761136   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.761574   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.761585   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.761954   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.762133   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:31:57.764344   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:31:57.764532   77627 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 18:31:57.764541   77627 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 18:31:57.764555   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:31:57.767111   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.767485   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:31:57.767498   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.767625   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:31:57.767763   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:31:57.767875   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:31:57.768004   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:31:57.965911   77627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:31:57.986557   77627 node_ready.go:35] waiting up to 6m0s for node "embed-certs-409322" to be "Ready" ...
	I0729 18:31:57.995790   77627 node_ready.go:49] node "embed-certs-409322" has status "Ready":"True"
	I0729 18:31:57.995809   77627 node_ready.go:38] duration metric: took 9.222398ms for node "embed-certs-409322" to be "Ready" ...
	I0729 18:31:57.995817   77627 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:31:58.003516   77627 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wpnfg" in "kube-system" namespace to be "Ready" ...
	I0729 18:31:58.047522   77627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:31:58.053274   77627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 18:31:58.053290   77627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 18:31:58.074101   77627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 18:31:58.074127   77627 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 18:31:58.088159   77627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 18:31:58.097491   77627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:31:58.097518   77627 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 18:31:58.125335   77627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:31:58.628396   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.628425   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.628466   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.628480   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.628847   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.628909   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Closing plugin on server side
	I0729 18:31:58.628918   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.628936   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.628946   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.628955   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.628914   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.628898   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Closing plugin on server side
	I0729 18:31:58.629017   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.629046   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.629268   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.629281   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.630616   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Closing plugin on server side
	I0729 18:31:58.630636   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.630649   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.660029   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.660061   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.660339   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.660358   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.975389   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.975414   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.975721   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.975740   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.975750   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.975760   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.976034   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.976051   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.976063   77627 addons.go:475] Verifying addon metrics-server=true in "embed-certs-409322"
	I0729 18:31:58.978172   77627 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 18:31:58.979568   77627 addons.go:510] duration metric: took 1.280977366s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
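	(Once the metrics-server manifests above have been applied, the addon's registration can also be checked outside the test harness. Sketch only: the APIService name comes from the metrics-apiservice.yaml applied above, while the pod label selector is an assumption about the deployment's labels.)

	    kubectl -n kube-system get pods -l k8s-app=metrics-server
	    kubectl get apiservice v1beta1.metrics.k8s.io
	    kubectl top nodes   # succeeds only once metrics-server is serving metrics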
	I0729 18:31:58.935700   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:00.935984   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:00.009825   77627 pod_ready.go:92] pod "coredns-7db6d8ff4d-wpnfg" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:00.009846   77627 pod_ready.go:81] duration metric: took 2.006300447s for pod "coredns-7db6d8ff4d-wpnfg" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:00.009855   77627 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:02.016463   77627 pod_ready.go:102] pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:04.515885   77627 pod_ready.go:102] pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:03.432654   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:05.434708   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:06.517308   77627 pod_ready.go:102] pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:09.016256   77627 pod_ready.go:92] pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.016276   77627 pod_ready.go:81] duration metric: took 9.006414116s for pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.016287   77627 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.021639   77627 pod_ready.go:92] pod "etcd-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.021661   77627 pod_ready.go:81] duration metric: took 5.365088ms for pod "etcd-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.021672   77627 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.026599   77627 pod_ready.go:92] pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.026618   77627 pod_ready.go:81] duration metric: took 4.939458ms for pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.026629   77627 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.031994   77627 pod_ready.go:92] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.032009   77627 pod_ready.go:81] duration metric: took 5.37307ms for pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.032020   77627 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kxf5z" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.036180   77627 pod_ready.go:92] pod "kube-proxy-kxf5z" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.036196   77627 pod_ready.go:81] duration metric: took 4.16934ms for pod "kube-proxy-kxf5z" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.036205   77627 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.414950   77627 pod_ready.go:92] pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.414973   77627 pod_ready.go:81] duration metric: took 378.76116ms for pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.414981   77627 pod_ready.go:38] duration metric: took 11.419116871s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
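	(The readiness polling above is driven by the test's pod_ready helper. An equivalent ad-hoc check with kubectl wait, using two of the label selectors listed in the log line above, would look roughly like this; timeouts are illustrative.)

	    kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=360s
	    kubectl -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=360s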
	I0729 18:32:09.414995   77627 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:32:09.415042   77627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:32:09.434210   77627 api_server.go:72] duration metric: took 11.735691998s to wait for apiserver process to appear ...
	I0729 18:32:09.434240   77627 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:32:09.434260   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:32:09.439755   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I0729 18:32:09.440612   77627 api_server.go:141] control plane version: v1.30.3
	I0729 18:32:09.440631   77627 api_server.go:131] duration metric: took 6.382802ms to wait for apiserver health ...
	I0729 18:32:09.440640   77627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:32:09.617533   77627 system_pods.go:59] 9 kube-system pods found
	I0729 18:32:09.617564   77627 system_pods.go:61] "coredns-7db6d8ff4d-wpnfg" [687cbc8f-370a-4b72-bc1c-6ae36efe890e] Running
	I0729 18:32:09.617569   77627 system_pods.go:61] "coredns-7db6d8ff4d-wztpj" [1f1a01e7-9cec-4ba8-a340-8f9ccdd728d7] Running
	I0729 18:32:09.617572   77627 system_pods.go:61] "etcd-embed-certs-409322" [68de54c3-7d47-4e79-a064-08b013b1d910] Running
	I0729 18:32:09.617575   77627 system_pods.go:61] "kube-apiserver-embed-certs-409322" [dc1a0568-ef7c-493f-91fb-7438456daf6d] Running
	I0729 18:32:09.617579   77627 system_pods.go:61] "kube-controller-manager-embed-certs-409322" [da715e8c-2437-487b-b4e0-c93af2f079f7] Running
	I0729 18:32:09.617582   77627 system_pods.go:61] "kube-proxy-kxf5z" [74ed1812-b3bf-429d-b8f1-bdccb3415fb5] Running
	I0729 18:32:09.617584   77627 system_pods.go:61] "kube-scheduler-embed-certs-409322" [188cf21a-9a8a-45de-9a91-9e593626ce6d] Running
	I0729 18:32:09.617591   77627 system_pods.go:61] "metrics-server-569cc877fc-6q4nl" [57dc61cc-7490-49e5-9d03-c81aa5d25aea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:32:09.617596   77627 system_pods.go:61] "storage-provisioner" [b0b1e31d-9b5c-4e82-aea7-56184832c053] Running
	I0729 18:32:09.617604   77627 system_pods.go:74] duration metric: took 176.958452ms to wait for pod list to return data ...
	I0729 18:32:09.617614   77627 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:32:09.813846   77627 default_sa.go:45] found service account: "default"
	I0729 18:32:09.813871   77627 default_sa.go:55] duration metric: took 196.249412ms for default service account to be created ...
	I0729 18:32:09.813886   77627 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 18:32:10.019167   77627 system_pods.go:86] 9 kube-system pods found
	I0729 18:32:10.019199   77627 system_pods.go:89] "coredns-7db6d8ff4d-wpnfg" [687cbc8f-370a-4b72-bc1c-6ae36efe890e] Running
	I0729 18:32:10.019208   77627 system_pods.go:89] "coredns-7db6d8ff4d-wztpj" [1f1a01e7-9cec-4ba8-a340-8f9ccdd728d7] Running
	I0729 18:32:10.019214   77627 system_pods.go:89] "etcd-embed-certs-409322" [68de54c3-7d47-4e79-a064-08b013b1d910] Running
	I0729 18:32:10.019220   77627 system_pods.go:89] "kube-apiserver-embed-certs-409322" [dc1a0568-ef7c-493f-91fb-7438456daf6d] Running
	I0729 18:32:10.019227   77627 system_pods.go:89] "kube-controller-manager-embed-certs-409322" [da715e8c-2437-487b-b4e0-c93af2f079f7] Running
	I0729 18:32:10.019233   77627 system_pods.go:89] "kube-proxy-kxf5z" [74ed1812-b3bf-429d-b8f1-bdccb3415fb5] Running
	I0729 18:32:10.019239   77627 system_pods.go:89] "kube-scheduler-embed-certs-409322" [188cf21a-9a8a-45de-9a91-9e593626ce6d] Running
	I0729 18:32:10.019249   77627 system_pods.go:89] "metrics-server-569cc877fc-6q4nl" [57dc61cc-7490-49e5-9d03-c81aa5d25aea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:32:10.019257   77627 system_pods.go:89] "storage-provisioner" [b0b1e31d-9b5c-4e82-aea7-56184832c053] Running
	I0729 18:32:10.019267   77627 system_pods.go:126] duration metric: took 205.375742ms to wait for k8s-apps to be running ...
	I0729 18:32:10.019278   77627 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 18:32:10.019326   77627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:32:10.034632   77627 system_svc.go:56] duration metric: took 15.345747ms WaitForService to wait for kubelet
	I0729 18:32:10.034659   77627 kubeadm.go:582] duration metric: took 12.336145267s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:32:10.034687   77627 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:32:10.214205   77627 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:32:10.214240   77627 node_conditions.go:123] node cpu capacity is 2
	I0729 18:32:10.214255   77627 node_conditions.go:105] duration metric: took 179.559492ms to run NodePressure ...
	I0729 18:32:10.214269   77627 start.go:241] waiting for startup goroutines ...
	I0729 18:32:10.214279   77627 start.go:246] waiting for cluster config update ...
	I0729 18:32:10.214297   77627 start.go:255] writing updated cluster config ...
	I0729 18:32:10.214639   77627 ssh_runner.go:195] Run: rm -f paused
	I0729 18:32:10.264858   77627 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 18:32:10.266718   77627 out.go:177] * Done! kubectl is now configured to use "embed-certs-409322" cluster and "default" namespace by default
	I0729 18:32:07.934519   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:10.434593   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:13.262907   78080 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 18:32:13.263487   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:13.263679   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:32:12.934686   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:13.928481   77394 pod_ready.go:81] duration metric: took 4m0.00080059s for pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace to be "Ready" ...
	E0729 18:32:13.928509   77394 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 18:32:13.928528   77394 pod_ready.go:38] duration metric: took 4m10.042077465s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:32:13.928554   77394 kubeadm.go:597] duration metric: took 4m18.205651497s to restartPrimaryControlPlane
	W0729 18:32:13.928623   77394 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 18:32:13.928649   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:32:18.264261   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:18.264554   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:32:28.265190   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:28.265433   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:32:40.226240   77394 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.297571665s)
	I0729 18:32:40.226316   77394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:32:40.243407   77394 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:32:40.254946   77394 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:32:40.264608   77394 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:32:40.264631   77394 kubeadm.go:157] found existing configuration files:
	
	I0729 18:32:40.264675   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:32:40.274180   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:32:40.274231   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:32:40.283752   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:32:40.293163   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:32:40.293232   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:32:40.302533   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:32:40.311972   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:32:40.312024   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:32:40.321513   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:32:40.330546   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:32:40.330599   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:32:40.340190   77394 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:32:40.389517   77394 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 18:32:40.389592   77394 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:32:40.508682   77394 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:32:40.508783   77394 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:32:40.508859   77394 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 18:32:40.517673   77394 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:32:40.520623   77394 out.go:204]   - Generating certificates and keys ...
	I0729 18:32:40.520726   77394 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:32:40.520824   77394 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:32:40.520893   77394 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:32:40.520961   77394 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:32:40.521045   77394 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:32:40.521094   77394 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:32:40.521171   77394 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:32:40.521254   77394 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:32:40.521357   77394 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:32:40.521475   77394 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:32:40.521535   77394 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:32:40.521606   77394 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:32:40.615870   77394 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:32:40.837902   77394 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 18:32:40.924418   77394 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:32:41.068573   77394 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:32:41.287201   77394 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:32:41.287991   77394 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:32:41.293523   77394 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:32:41.295211   77394 out.go:204]   - Booting up control plane ...
	I0729 18:32:41.295329   77394 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:32:41.295455   77394 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:32:41.295560   77394 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:32:41.317802   77394 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:32:41.324522   77394 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:32:41.324589   77394 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:32:41.463007   77394 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 18:32:41.463116   77394 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 18:32:41.982144   77394 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 519.208408ms
	I0729 18:32:41.982263   77394 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 18:32:46.983564   77394 kubeadm.go:310] [api-check] The API server is healthy after 5.001335599s
	I0729 18:32:46.999811   77394 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 18:32:47.018194   77394 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 18:32:47.051359   77394 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 18:32:47.051564   77394 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-888056 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 18:32:47.062615   77394 kubeadm.go:310] [bootstrap-token] Using token: a14u5x.5d4oe8yqdl9tiifc
	I0729 18:32:47.064051   77394 out.go:204]   - Configuring RBAC rules ...
	I0729 18:32:47.064187   77394 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 18:32:47.071856   77394 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 18:32:47.084985   77394 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 18:32:47.088622   77394 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 18:32:47.091797   77394 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 18:32:47.096194   77394 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 18:32:47.391394   77394 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 18:32:47.834314   77394 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 18:32:48.394665   77394 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 18:32:48.394689   77394 kubeadm.go:310] 
	I0729 18:32:48.394763   77394 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 18:32:48.394797   77394 kubeadm.go:310] 
	I0729 18:32:48.394928   77394 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 18:32:48.394941   77394 kubeadm.go:310] 
	I0729 18:32:48.394979   77394 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 18:32:48.395058   77394 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 18:32:48.395126   77394 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 18:32:48.395141   77394 kubeadm.go:310] 
	I0729 18:32:48.395221   77394 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 18:32:48.395230   77394 kubeadm.go:310] 
	I0729 18:32:48.395297   77394 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 18:32:48.395306   77394 kubeadm.go:310] 
	I0729 18:32:48.395374   77394 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 18:32:48.395467   77394 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 18:32:48.395554   77394 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 18:32:48.395563   77394 kubeadm.go:310] 
	I0729 18:32:48.395652   77394 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 18:32:48.395766   77394 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 18:32:48.395778   77394 kubeadm.go:310] 
	I0729 18:32:48.395886   77394 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a14u5x.5d4oe8yqdl9tiifc \
	I0729 18:32:48.396030   77394 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 \
	I0729 18:32:48.396062   77394 kubeadm.go:310] 	--control-plane 
	I0729 18:32:48.396071   77394 kubeadm.go:310] 
	I0729 18:32:48.396191   77394 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 18:32:48.396200   77394 kubeadm.go:310] 
	I0729 18:32:48.396276   77394 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a14u5x.5d4oe8yqdl9tiifc \
	I0729 18:32:48.396393   77394 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 
	I0729 18:32:48.397540   77394 kubeadm.go:310] W0729 18:32:40.358164    2949 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 18:32:48.397921   77394 kubeadm.go:310] W0729 18:32:40.359840    2949 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 18:32:48.398071   77394 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:32:48.398090   77394 cni.go:84] Creating CNI manager for ""
	I0729 18:32:48.398099   77394 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:32:48.399641   77394 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:32:48.266531   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:48.266736   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:32:48.400846   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:32:48.412594   77394 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 18:32:48.434792   77394 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 18:32:48.434872   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:48.434907   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-888056 minikube.k8s.io/updated_at=2024_07_29T18_32_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899 minikube.k8s.io/name=no-preload-888056 minikube.k8s.io/primary=true
	I0729 18:32:48.672892   77394 ops.go:34] apiserver oom_adj: -16
	I0729 18:32:48.673144   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:49.173811   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:49.673775   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:50.173717   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:50.673774   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:51.174068   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:51.673565   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:52.173431   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:52.673602   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:53.173912   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:53.315565   77394 kubeadm.go:1113] duration metric: took 4.880757535s to wait for elevateKubeSystemPrivileges
	I0729 18:32:53.315609   77394 kubeadm.go:394] duration metric: took 4m57.645527986s to StartCluster
	I0729 18:32:53.315633   77394 settings.go:142] acquiring lock: {Name:mkd2c4591636cc1d19b23a0dab1807db2e7ea395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:32:53.315736   77394 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:32:53.317360   77394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:32:53.317579   77394 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.80 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:32:53.317669   77394 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 18:32:53.317784   77394 addons.go:69] Setting storage-provisioner=true in profile "no-preload-888056"
	I0729 18:32:53.317820   77394 addons.go:234] Setting addon storage-provisioner=true in "no-preload-888056"
	I0729 18:32:53.317817   77394 addons.go:69] Setting default-storageclass=true in profile "no-preload-888056"
	W0729 18:32:53.317835   77394 addons.go:243] addon storage-provisioner should already be in state true
	I0729 18:32:53.317840   77394 config.go:182] Loaded profile config "no-preload-888056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:32:53.317836   77394 addons.go:69] Setting metrics-server=true in profile "no-preload-888056"
	I0729 18:32:53.317861   77394 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-888056"
	I0729 18:32:53.317878   77394 host.go:66] Checking if "no-preload-888056" exists ...
	I0729 18:32:53.317882   77394 addons.go:234] Setting addon metrics-server=true in "no-preload-888056"
	W0729 18:32:53.317892   77394 addons.go:243] addon metrics-server should already be in state true
	I0729 18:32:53.317927   77394 host.go:66] Checking if "no-preload-888056" exists ...
	I0729 18:32:53.318302   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.318308   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.318334   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.318345   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.318301   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.318441   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.319022   77394 out.go:177] * Verifying Kubernetes components...
	I0729 18:32:53.320383   77394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:32:53.335666   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38257
	I0729 18:32:53.336170   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.336860   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.336896   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.337301   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.338104   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39753
	I0729 18:32:53.338137   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40655
	I0729 18:32:53.338545   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.338559   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.338595   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.338614   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.339076   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.339094   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.339163   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.339188   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.339510   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.340089   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.340126   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.340346   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.340557   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:32:53.344286   77394 addons.go:234] Setting addon default-storageclass=true in "no-preload-888056"
	W0729 18:32:53.344307   77394 addons.go:243] addon default-storageclass should already be in state true
	I0729 18:32:53.344335   77394 host.go:66] Checking if "no-preload-888056" exists ...
	I0729 18:32:53.344702   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.344727   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.356006   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33765
	I0729 18:32:53.356613   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.357135   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.357159   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.357517   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.357604   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34733
	I0729 18:32:53.357752   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:32:53.358011   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.358472   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.358490   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.358898   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.359110   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:32:53.359546   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:32:53.360493   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:32:53.361662   77394 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:32:53.362464   77394 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 18:32:53.363294   77394 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:32:53.363311   77394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 18:32:53.363331   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:32:53.364170   77394 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 18:32:53.364182   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41425
	I0729 18:32:53.364186   77394 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 18:32:53.364205   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:32:53.364560   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.365040   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.365061   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.365515   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.365963   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.365983   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.367883   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.368768   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.369264   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:32:53.369284   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.369576   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:32:53.369591   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.369858   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:32:53.369964   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:32:53.370009   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:32:53.370102   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:32:53.370169   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:32:53.370198   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:32:53.370317   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:32:53.370344   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:32:53.382571   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37093
	I0729 18:32:53.382940   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.383311   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.383336   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.383748   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.383946   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:32:53.385570   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:32:53.385761   77394 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 18:32:53.385775   77394 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 18:32:53.385792   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:32:53.388411   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.388756   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:32:53.388774   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.389017   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:32:53.389193   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:32:53.389350   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:32:53.389463   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:32:53.585542   77394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:32:53.645556   77394 node_ready.go:35] waiting up to 6m0s for node "no-preload-888056" to be "Ready" ...
	I0729 18:32:53.657965   77394 node_ready.go:49] node "no-preload-888056" has status "Ready":"True"
	I0729 18:32:53.657997   77394 node_ready.go:38] duration metric: took 12.408834ms for node "no-preload-888056" to be "Ready" ...
	I0729 18:32:53.658010   77394 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:32:53.673068   77394 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:53.724224   77394 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 18:32:53.724248   77394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 18:32:53.763536   77394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 18:32:53.774123   77394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:32:53.812615   77394 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 18:32:53.812639   77394 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 18:32:53.945274   77394 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:32:53.945303   77394 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 18:32:54.107180   77394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:32:54.184354   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.184379   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.184699   77394 main.go:141] libmachine: (no-preload-888056) DBG | Closing plugin on server side
	I0729 18:32:54.184748   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.184762   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.184776   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.184786   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.185015   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.185043   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.185077   77394 main.go:141] libmachine: (no-preload-888056) DBG | Closing plugin on server side
	I0729 18:32:54.244759   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.244781   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.245108   77394 main.go:141] libmachine: (no-preload-888056) DBG | Closing plugin on server side
	I0729 18:32:54.245156   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.245169   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.782604   77394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.008443119s)
	I0729 18:32:54.782663   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.782676   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.782990   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.783010   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.783020   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.783028   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.783265   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.783283   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.946051   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.946074   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.946396   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.946418   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.946430   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.946439   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.946680   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.946698   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.946710   77394 addons.go:475] Verifying addon metrics-server=true in "no-preload-888056"
	I0729 18:32:54.948362   77394 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 18:32:54.949821   77394 addons.go:510] duration metric: took 1.632153415s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0729 18:32:55.679655   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:57.680175   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace has status "Ready":"False"
	I0729 18:33:00.179877   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace has status "Ready":"False"
	I0729 18:33:01.180068   77394 pod_ready.go:92] pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.180094   77394 pod_ready.go:81] duration metric: took 7.506992362s for pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.180106   77394 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-j9ddw" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.185742   77394 pod_ready.go:92] pod "coredns-5cfdc65f69-j9ddw" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.185760   77394 pod_ready.go:81] duration metric: took 5.647157ms for pod "coredns-5cfdc65f69-j9ddw" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.185769   77394 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.190056   77394 pod_ready.go:92] pod "etcd-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.190077   77394 pod_ready.go:81] duration metric: took 4.30181ms for pod "etcd-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.190085   77394 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.194255   77394 pod_ready.go:92] pod "kube-apiserver-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.194273   77394 pod_ready.go:81] duration metric: took 4.182006ms for pod "kube-apiserver-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.194284   77394 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.199056   77394 pod_ready.go:92] pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.199072   77394 pod_ready.go:81] duration metric: took 4.779158ms for pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.199081   77394 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-94ff9" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.578279   77394 pod_ready.go:92] pod "kube-proxy-94ff9" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.578299   77394 pod_ready.go:81] duration metric: took 379.211109ms for pod "kube-proxy-94ff9" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.578308   77394 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:02.378184   77394 pod_ready.go:92] pod "kube-scheduler-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:02.378205   77394 pod_ready.go:81] duration metric: took 799.890202ms for pod "kube-scheduler-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:02.378212   77394 pod_ready.go:38] duration metric: took 8.720189182s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:33:02.378226   77394 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:33:02.378282   77394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:33:02.396023   77394 api_server.go:72] duration metric: took 9.07841179s to wait for apiserver process to appear ...
	I0729 18:33:02.396050   77394 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:33:02.396070   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:33:02.403736   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 200:
	ok
	I0729 18:33:02.404828   77394 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 18:33:02.404850   77394 api_server.go:131] duration metric: took 8.793481ms to wait for apiserver health ...
	I0729 18:33:02.404858   77394 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:33:02.580656   77394 system_pods.go:59] 9 kube-system pods found
	I0729 18:33:02.580683   77394 system_pods.go:61] "coredns-5cfdc65f69-bbh6c" [66b43af3-78eb-437f-81d7-eedb4cc34349] Running
	I0729 18:33:02.580687   77394 system_pods.go:61] "coredns-5cfdc65f69-j9ddw" [679f8750-86aa-4e00-8291-6996b54b1930] Running
	I0729 18:33:02.580691   77394 system_pods.go:61] "etcd-no-preload-888056" [abcd648d-659a-4f02-a769-f2222eaac945] Running
	I0729 18:33:02.580695   77394 system_pods.go:61] "kube-apiserver-no-preload-888056" [99a48803-06b1-44a6-a0cc-f28f2ba7235f] Running
	I0729 18:33:02.580699   77394 system_pods.go:61] "kube-controller-manager-no-preload-888056" [6bb3d64c-9fef-41ee-a68d-170fac01dec5] Running
	I0729 18:33:02.580702   77394 system_pods.go:61] "kube-proxy-94ff9" [dd06899e-3d54-4b71-bda6-f8c6d06ce100] Running
	I0729 18:33:02.580704   77394 system_pods.go:61] "kube-scheduler-no-preload-888056" [a1b60226-df5e-45ce-8382-a8d277278129] Running
	I0729 18:33:02.580710   77394 system_pods.go:61] "metrics-server-78fcd8795b-9qqmj" [45bbbaf3-cf3e-4db1-9eec-693425bc5dff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:33:02.580714   77394 system_pods.go:61] "storage-provisioner" [0aacb67c-abea-47fb-a2f1-f1245e68599a] Running
	I0729 18:33:02.580721   77394 system_pods.go:74] duration metric: took 175.857868ms to wait for pod list to return data ...
	I0729 18:33:02.580728   77394 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:33:02.778962   77394 default_sa.go:45] found service account: "default"
	I0729 18:33:02.778987   77394 default_sa.go:55] duration metric: took 198.250326ms for default service account to be created ...
	I0729 18:33:02.778995   77394 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 18:33:02.981123   77394 system_pods.go:86] 9 kube-system pods found
	I0729 18:33:02.981159   77394 system_pods.go:89] "coredns-5cfdc65f69-bbh6c" [66b43af3-78eb-437f-81d7-eedb4cc34349] Running
	I0729 18:33:02.981166   77394 system_pods.go:89] "coredns-5cfdc65f69-j9ddw" [679f8750-86aa-4e00-8291-6996b54b1930] Running
	I0729 18:33:02.981175   77394 system_pods.go:89] "etcd-no-preload-888056" [abcd648d-659a-4f02-a769-f2222eaac945] Running
	I0729 18:33:02.981181   77394 system_pods.go:89] "kube-apiserver-no-preload-888056" [99a48803-06b1-44a6-a0cc-f28f2ba7235f] Running
	I0729 18:33:02.981186   77394 system_pods.go:89] "kube-controller-manager-no-preload-888056" [6bb3d64c-9fef-41ee-a68d-170fac01dec5] Running
	I0729 18:33:02.981190   77394 system_pods.go:89] "kube-proxy-94ff9" [dd06899e-3d54-4b71-bda6-f8c6d06ce100] Running
	I0729 18:33:02.981196   77394 system_pods.go:89] "kube-scheduler-no-preload-888056" [a1b60226-df5e-45ce-8382-a8d277278129] Running
	I0729 18:33:02.981206   77394 system_pods.go:89] "metrics-server-78fcd8795b-9qqmj" [45bbbaf3-cf3e-4db1-9eec-693425bc5dff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:33:02.981214   77394 system_pods.go:89] "storage-provisioner" [0aacb67c-abea-47fb-a2f1-f1245e68599a] Running
	I0729 18:33:02.981228   77394 system_pods.go:126] duration metric: took 202.226569ms to wait for k8s-apps to be running ...
	I0729 18:33:02.981239   77394 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 18:33:02.981290   77394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:33:02.999134   77394 system_svc.go:56] duration metric: took 17.878004ms WaitForService to wait for kubelet
	I0729 18:33:02.999169   77394 kubeadm.go:582] duration metric: took 9.681562891s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:33:02.999187   77394 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:33:03.179246   77394 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:33:03.179274   77394 node_conditions.go:123] node cpu capacity is 2
	I0729 18:33:03.179286   77394 node_conditions.go:105] duration metric: took 180.093491ms to run NodePressure ...
	I0729 18:33:03.179312   77394 start.go:241] waiting for startup goroutines ...
	I0729 18:33:03.179322   77394 start.go:246] waiting for cluster config update ...
	I0729 18:33:03.179344   77394 start.go:255] writing updated cluster config ...
	I0729 18:33:03.179658   77394 ssh_runner.go:195] Run: rm -f paused
	I0729 18:33:03.228664   77394 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 18:33:03.230706   77394 out.go:177] * Done! kubectl is now configured to use "no-preload-888056" cluster and "default" namespace by default
	I0729 18:33:28.269122   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:33:28.269375   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:33:28.269399   78080 kubeadm.go:310] 
	I0729 18:33:28.269433   78080 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 18:33:28.269471   78080 kubeadm.go:310] 		timed out waiting for the condition
	I0729 18:33:28.269480   78080 kubeadm.go:310] 
	I0729 18:33:28.269508   78080 kubeadm.go:310] 	This error is likely caused by:
	I0729 18:33:28.269541   78080 kubeadm.go:310] 		- The kubelet is not running
	I0729 18:33:28.269686   78080 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 18:33:28.269698   78080 kubeadm.go:310] 
	I0729 18:33:28.269846   78080 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 18:33:28.269902   78080 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 18:33:28.269946   78080 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 18:33:28.269969   78080 kubeadm.go:310] 
	I0729 18:33:28.270132   78080 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 18:33:28.270246   78080 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 18:33:28.270258   78080 kubeadm.go:310] 
	I0729 18:33:28.270434   78080 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 18:33:28.270567   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 18:33:28.270674   78080 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 18:33:28.270774   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 18:33:28.270784   78080 kubeadm.go:310] 
	I0729 18:33:28.271347   78080 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:33:28.271428   78080 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 18:33:28.271503   78080 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 18:33:28.271650   78080 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 18:33:28.271713   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:33:28.743675   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:33:28.759228   78080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:33:28.768522   78080 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:33:28.768546   78080 kubeadm.go:157] found existing configuration files:
	
	I0729 18:33:28.768593   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:33:28.777423   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:33:28.777481   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:33:28.786450   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:33:28.795335   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:33:28.795386   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:33:28.804519   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:33:28.813137   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:33:28.813193   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:33:28.822053   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:33:28.830463   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:33:28.830513   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:33:28.839818   78080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:33:29.066010   78080 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:35:25.197434   78080 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 18:35:25.197566   78080 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 18:35:25.199476   78080 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 18:35:25.199554   78080 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:35:25.199667   78080 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:35:25.199800   78080 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:35:25.199937   78080 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:35:25.200054   78080 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:35:25.201801   78080 out.go:204]   - Generating certificates and keys ...
	I0729 18:35:25.201875   78080 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:35:25.201944   78080 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:35:25.202073   78080 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:35:25.202136   78080 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:35:25.202231   78080 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:35:25.202287   78080 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:35:25.202339   78080 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:35:25.202426   78080 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:35:25.202492   78080 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:35:25.202560   78080 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:35:25.202603   78080 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:35:25.202692   78080 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:35:25.202779   78080 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:35:25.202863   78080 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:35:25.202962   78080 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:35:25.203070   78080 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:35:25.203213   78080 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:35:25.203289   78080 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:35:25.203323   78080 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:35:25.203381   78080 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:35:25.204837   78080 out.go:204]   - Booting up control plane ...
	I0729 18:35:25.204920   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:35:25.204985   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:35:25.205053   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:35:25.205146   78080 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:35:25.205274   78080 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 18:35:25.205316   78080 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 18:35:25.205379   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.205591   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.205658   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.205828   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.205926   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.206142   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.206204   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.206411   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.206488   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.206683   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.206698   78080 kubeadm.go:310] 
	I0729 18:35:25.206755   78080 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 18:35:25.206817   78080 kubeadm.go:310] 		timed out waiting for the condition
	I0729 18:35:25.206827   78080 kubeadm.go:310] 
	I0729 18:35:25.206860   78080 kubeadm.go:310] 	This error is likely caused by:
	I0729 18:35:25.206890   78080 kubeadm.go:310] 		- The kubelet is not running
	I0729 18:35:25.206975   78080 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 18:35:25.206985   78080 kubeadm.go:310] 
	I0729 18:35:25.207099   78080 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 18:35:25.207134   78080 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 18:35:25.207167   78080 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 18:35:25.207177   78080 kubeadm.go:310] 
	I0729 18:35:25.207289   78080 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 18:35:25.207403   78080 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 18:35:25.207412   78080 kubeadm.go:310] 
	I0729 18:35:25.207532   78080 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 18:35:25.207640   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 18:35:25.207754   78080 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 18:35:25.207821   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 18:35:25.207854   78080 kubeadm.go:310] 
	I0729 18:35:25.207886   78080 kubeadm.go:394] duration metric: took 7m57.080498205s to StartCluster
	I0729 18:35:25.207923   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:35:25.207983   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:35:25.251803   78080 cri.go:89] found id: ""
	I0729 18:35:25.251841   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.251852   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:35:25.251859   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:35:25.251920   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:35:25.287842   78080 cri.go:89] found id: ""
	I0729 18:35:25.287877   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.287895   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:35:25.287903   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:35:25.287967   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:35:25.324546   78080 cri.go:89] found id: ""
	I0729 18:35:25.324573   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.324582   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:35:25.324588   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:35:25.324634   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:35:25.375723   78080 cri.go:89] found id: ""
	I0729 18:35:25.375746   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.375753   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:35:25.375759   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:35:25.375812   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:35:25.412580   78080 cri.go:89] found id: ""
	I0729 18:35:25.412604   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.412612   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:35:25.412617   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:35:25.412664   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:35:25.449360   78080 cri.go:89] found id: ""
	I0729 18:35:25.449397   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.449406   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:35:25.449413   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:35:25.449464   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:35:25.485655   78080 cri.go:89] found id: ""
	I0729 18:35:25.485687   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.485698   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:35:25.485705   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:35:25.485769   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:35:25.521752   78080 cri.go:89] found id: ""
	I0729 18:35:25.521776   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.521783   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:35:25.521792   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:35:25.521808   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:35:25.562894   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:35:25.562922   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:35:25.623879   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:35:25.623912   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:35:25.647315   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:35:25.647341   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:35:25.744827   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:35:25.744850   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:35:25.744865   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0729 18:35:25.849394   78080 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 18:35:25.849445   78080 out.go:239] * 
	W0729 18:35:25.849520   78080 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 18:35:25.849558   78080 out.go:239] * 
	W0729 18:35:25.850438   78080 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 18:35:25.853770   78080 out.go:177] 
	W0729 18:35:25.854982   78080 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 18:35:25.855035   78080 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 18:35:25.855060   78080 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 18:35:25.856444   78080 out.go:177] 
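The repeated "connection refused" responses from http://localhost:10248/healthz above mean the kubelet never became healthy, and the suggestion minikube prints points at the most common cause on CRI-O guests: a cgroup-driver mismatch between the kubelet and the container runtime. A minimal manual check is sketched below; it assumes shell access to the affected guest (via `minikube ssh`), the default config paths seen earlier in this log, and a placeholder profile name. These commands are illustrative and were not executed as part of this run.

	# open a shell on the node for the failing profile (name is a placeholder)
	minikube ssh -p <profile>
	# which cgroup driver is the kubelet configured for?
	sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml
	# which cgroup manager is CRI-O using? (cgroup_manager in the [crio.runtime] section)
	sudo crio config | grep -i cgroup_manager
	# the kubelet's own explanation of why it is not starting
	sudo journalctl -xeu kubelet | tail -n 50

If the two values disagree (for example cgroupfs for the kubelet and systemd for CRI-O), retrying the start with the flag suggested in the log output, `--extra-config=kubelet.cgroup-driver=systemd`, is the usual next step.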
	
	
	==> CRI-O <==
	Jul 29 18:40:41 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:40:41.733366148Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278441733286198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1479365f-f71f-49e0-ab55-a42dbe5eb555 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:40:41 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:40:41.733849326Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=75991233-ee71-4f60-9551-e34c594f67b8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:40:41 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:40:41.733960103Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=75991233-ee71-4f60-9551-e34c594f67b8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:40:41 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:40:41.734163003Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481,PodSandboxId:2cf16ca38be5f93e11353858e2145e82ad7f347fb110214f32b29b49abec9064,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277664606333649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2264d30-60dc-41f9-9b84-3b073031cf1b,},Annotations:map[string]string{io.kubernetes.container.hash: 50277011,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703935071efbcce9eda793a20cfa2eb88e4bf49ea400ba3282cfc8eb25fa4881,PodSandboxId:c66545b1536588610a9c3b00d6383eb0caf5ad3456d5ff739bb3fc889713c494,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722277642620412552,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16c2fbe7-3235-4c35-b89d-d36c39f5e8e3,},Annotations:map[string]string{io.kubernetes.container.hash: 6562b6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b,PodSandboxId:de6ef185d4530830585e64f93c85fac72c9f067ef410fa2d1164d0d28291b083,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277641405583173,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mk6mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e005b1f9-cc7a-45aa-915e-85a461ebc814,},Annotations:map[string]string{io.kubernetes.container.hash: ecd1b366,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9,PodSandboxId:afc17abc1c9144526ad042ee737b7273b4316a55f78e4cccbae1fd4f5bcb0937,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722277633728301138,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cgdm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57a99bb3-9e
63-47dd-a958-5be7f3c0a9c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7d6ad1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b,PodSandboxId:2cf16ca38be5f93e11353858e2145e82ad7f347fb110214f32b29b49abec9064,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722277633652711442,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2264d30-60dc-41f9-9b84-
3b073031cf1b,},Annotations:map[string]string{io.kubernetes.container.hash: 50277011,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a,PodSandboxId:68cd61bf2e8391ef03eca0645fde97f05f40b36a85956efa3b289d50e213b255,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722277628915439843,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7fe6a09729d5e8dbafc14f0bd53ac8,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 7fe8cd16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4,PodSandboxId:c1b20eb7e651a8283f9648b84f5c31e2dbf00a6ad1dc88868562196ea70f49c4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722277628904473069,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54c6f6d9e5e180f29a0eb48ba166ce41,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 531499dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc,PodSandboxId:82547cb3c923e9c5881111ce82af9322ad9355c946b36cea567f286568a5996f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722277628855580748,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6946d6da2d8baa3ff93ee0849b60
c03,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd,PodSandboxId:286ad36c4b4b8168d515c0001413b5e02c2f8919968e7f7dfa27c328b742da38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722277628813397299,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b81f633ff1c79be9b30276a7840ab3b
3,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=75991233-ee71-4f60-9551-e34c594f67b8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:40:41 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:40:41.771150229Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1c18c207-a9d6-44b1-9add-34b62ba890ea name=/runtime.v1.RuntimeService/Version
	Jul 29 18:40:41 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:40:41.771237601Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1c18c207-a9d6-44b1-9add-34b62ba890ea name=/runtime.v1.RuntimeService/Version
	Jul 29 18:40:41 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:40:41.772369829Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=70f553c4-bd62-4fb9-90f2-4f7ae913a7ea name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:40:41 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:40:41.772750638Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278441772729816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=70f553c4-bd62-4fb9-90f2-4f7ae913a7ea name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:40:41 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:40:41.773512437Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc985729-e711-4094-a423-e29925cb0aa7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:40:41 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:40:41.773589824Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc985729-e711-4094-a423-e29925cb0aa7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:40:41 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:40:41.773783868Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481,PodSandboxId:2cf16ca38be5f93e11353858e2145e82ad7f347fb110214f32b29b49abec9064,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277664606333649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2264d30-60dc-41f9-9b84-3b073031cf1b,},Annotations:map[string]string{io.kubernetes.container.hash: 50277011,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703935071efbcce9eda793a20cfa2eb88e4bf49ea400ba3282cfc8eb25fa4881,PodSandboxId:c66545b1536588610a9c3b00d6383eb0caf5ad3456d5ff739bb3fc889713c494,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722277642620412552,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16c2fbe7-3235-4c35-b89d-d36c39f5e8e3,},Annotations:map[string]string{io.kubernetes.container.hash: 6562b6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b,PodSandboxId:de6ef185d4530830585e64f93c85fac72c9f067ef410fa2d1164d0d28291b083,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277641405583173,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mk6mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e005b1f9-cc7a-45aa-915e-85a461ebc814,},Annotations:map[string]string{io.kubernetes.container.hash: ecd1b366,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9,PodSandboxId:afc17abc1c9144526ad042ee737b7273b4316a55f78e4cccbae1fd4f5bcb0937,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722277633728301138,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cgdm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57a99bb3-9e
63-47dd-a958-5be7f3c0a9c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7d6ad1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b,PodSandboxId:2cf16ca38be5f93e11353858e2145e82ad7f347fb110214f32b29b49abec9064,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722277633652711442,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2264d30-60dc-41f9-9b84-
3b073031cf1b,},Annotations:map[string]string{io.kubernetes.container.hash: 50277011,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a,PodSandboxId:68cd61bf2e8391ef03eca0645fde97f05f40b36a85956efa3b289d50e213b255,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722277628915439843,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7fe6a09729d5e8dbafc14f0bd53ac8,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 7fe8cd16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4,PodSandboxId:c1b20eb7e651a8283f9648b84f5c31e2dbf00a6ad1dc88868562196ea70f49c4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722277628904473069,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54c6f6d9e5e180f29a0eb48ba166ce41,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 531499dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc,PodSandboxId:82547cb3c923e9c5881111ce82af9322ad9355c946b36cea567f286568a5996f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722277628855580748,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6946d6da2d8baa3ff93ee0849b60
c03,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd,PodSandboxId:286ad36c4b4b8168d515c0001413b5e02c2f8919968e7f7dfa27c328b742da38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722277628813397299,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b81f633ff1c79be9b30276a7840ab3b
3,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc985729-e711-4094-a423-e29925cb0aa7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:40:41 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:40:41.812607373Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fa5203bc-3169-4eb9-a2f4-71a716e1c8aa name=/runtime.v1.RuntimeService/Version
	Jul 29 18:40:41 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:40:41.812677339Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fa5203bc-3169-4eb9-a2f4-71a716e1c8aa name=/runtime.v1.RuntimeService/Version
	Jul 29 18:40:41 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:40:41.813717654Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d0164803-5f83-45bc-898f-305c02369989 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:40:41 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:40:41.814329888Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278441814305782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0164803-5f83-45bc-898f-305c02369989 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:40:41 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:40:41.814782382Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a3b05579-41d7-400a-86e3-afa437e42330 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:40:41 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:40:41.814833870Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a3b05579-41d7-400a-86e3-afa437e42330 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:40:41 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:40:41.815085387Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481,PodSandboxId:2cf16ca38be5f93e11353858e2145e82ad7f347fb110214f32b29b49abec9064,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277664606333649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2264d30-60dc-41f9-9b84-3b073031cf1b,},Annotations:map[string]string{io.kubernetes.container.hash: 50277011,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703935071efbcce9eda793a20cfa2eb88e4bf49ea400ba3282cfc8eb25fa4881,PodSandboxId:c66545b1536588610a9c3b00d6383eb0caf5ad3456d5ff739bb3fc889713c494,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722277642620412552,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16c2fbe7-3235-4c35-b89d-d36c39f5e8e3,},Annotations:map[string]string{io.kubernetes.container.hash: 6562b6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b,PodSandboxId:de6ef185d4530830585e64f93c85fac72c9f067ef410fa2d1164d0d28291b083,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277641405583173,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mk6mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e005b1f9-cc7a-45aa-915e-85a461ebc814,},Annotations:map[string]string{io.kubernetes.container.hash: ecd1b366,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9,PodSandboxId:afc17abc1c9144526ad042ee737b7273b4316a55f78e4cccbae1fd4f5bcb0937,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722277633728301138,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cgdm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57a99bb3-9e
63-47dd-a958-5be7f3c0a9c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7d6ad1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b,PodSandboxId:2cf16ca38be5f93e11353858e2145e82ad7f347fb110214f32b29b49abec9064,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722277633652711442,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2264d30-60dc-41f9-9b84-
3b073031cf1b,},Annotations:map[string]string{io.kubernetes.container.hash: 50277011,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a,PodSandboxId:68cd61bf2e8391ef03eca0645fde97f05f40b36a85956efa3b289d50e213b255,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722277628915439843,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7fe6a09729d5e8dbafc14f0bd53ac8,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 7fe8cd16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4,PodSandboxId:c1b20eb7e651a8283f9648b84f5c31e2dbf00a6ad1dc88868562196ea70f49c4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722277628904473069,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54c6f6d9e5e180f29a0eb48ba166ce41,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 531499dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc,PodSandboxId:82547cb3c923e9c5881111ce82af9322ad9355c946b36cea567f286568a5996f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722277628855580748,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6946d6da2d8baa3ff93ee0849b60
c03,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd,PodSandboxId:286ad36c4b4b8168d515c0001413b5e02c2f8919968e7f7dfa27c328b742da38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722277628813397299,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b81f633ff1c79be9b30276a7840ab3b
3,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a3b05579-41d7-400a-86e3-afa437e42330 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:40:41 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:40:41.852433839Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c1c1f39a-8ff7-479f-a512-3803d84c76e9 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:40:41 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:40:41.852504931Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c1c1f39a-8ff7-479f-a512-3803d84c76e9 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:40:41 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:40:41.853396959Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0bda70a5-8d2b-46c1-9f5b-2fec22db1a71 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:40:41 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:40:41.853858877Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278441853835084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0bda70a5-8d2b-46c1-9f5b-2fec22db1a71 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:40:41 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:40:41.854440160Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cd7bdd5b-b82f-4b49-a86c-d38c675f1702 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:40:41 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:40:41.854517096Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cd7bdd5b-b82f-4b49-a86c-d38c675f1702 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:40:41 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:40:41.854899906Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481,PodSandboxId:2cf16ca38be5f93e11353858e2145e82ad7f347fb110214f32b29b49abec9064,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277664606333649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2264d30-60dc-41f9-9b84-3b073031cf1b,},Annotations:map[string]string{io.kubernetes.container.hash: 50277011,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703935071efbcce9eda793a20cfa2eb88e4bf49ea400ba3282cfc8eb25fa4881,PodSandboxId:c66545b1536588610a9c3b00d6383eb0caf5ad3456d5ff739bb3fc889713c494,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722277642620412552,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16c2fbe7-3235-4c35-b89d-d36c39f5e8e3,},Annotations:map[string]string{io.kubernetes.container.hash: 6562b6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b,PodSandboxId:de6ef185d4530830585e64f93c85fac72c9f067ef410fa2d1164d0d28291b083,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277641405583173,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mk6mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e005b1f9-cc7a-45aa-915e-85a461ebc814,},Annotations:map[string]string{io.kubernetes.container.hash: ecd1b366,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9,PodSandboxId:afc17abc1c9144526ad042ee737b7273b4316a55f78e4cccbae1fd4f5bcb0937,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722277633728301138,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cgdm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57a99bb3-9e
63-47dd-a958-5be7f3c0a9c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7d6ad1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b,PodSandboxId:2cf16ca38be5f93e11353858e2145e82ad7f347fb110214f32b29b49abec9064,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722277633652711442,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2264d30-60dc-41f9-9b84-
3b073031cf1b,},Annotations:map[string]string{io.kubernetes.container.hash: 50277011,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a,PodSandboxId:68cd61bf2e8391ef03eca0645fde97f05f40b36a85956efa3b289d50e213b255,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722277628915439843,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7fe6a09729d5e8dbafc14f0bd53ac8,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 7fe8cd16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4,PodSandboxId:c1b20eb7e651a8283f9648b84f5c31e2dbf00a6ad1dc88868562196ea70f49c4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722277628904473069,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54c6f6d9e5e180f29a0eb48ba166ce41,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 531499dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc,PodSandboxId:82547cb3c923e9c5881111ce82af9322ad9355c946b36cea567f286568a5996f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722277628855580748,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6946d6da2d8baa3ff93ee0849b60
c03,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd,PodSandboxId:286ad36c4b4b8168d515c0001413b5e02c2f8919968e7f7dfa27c328b742da38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722277628813397299,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b81f633ff1c79be9b30276a7840ab3b
3,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cd7bdd5b-b82f-4b49-a86c-d38c675f1702 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9d54b3da125ce       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   2cf16ca38be5f       storage-provisioner
	703935071efbc       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   c66545b153658       busybox
	2b2cc4240a68e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   de6ef185d4530       coredns-7db6d8ff4d-mk6mx
	ec56fb749b981       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago      Running             kube-proxy                1                   afc17abc1c914       kube-proxy-cgdm8
	482ca3200e17e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   2cf16ca38be5f       storage-provisioner
	fec93784adcb5       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   68cd61bf2e839       etcd-default-k8s-diff-port-502055
	630d0a93e04a3       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      13 minutes ago      Running             kube-apiserver            1                   c1b20eb7e651a       kube-apiserver-default-k8s-diff-port-502055
	92b99f54da092       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      13 minutes ago      Running             kube-controller-manager   1                   82547cb3c923e       kube-controller-manager-default-k8s-diff-port-502055
	991e6d9556b66       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago      Running             kube-scheduler            1                   286ad36c4b4b8       kube-scheduler-default-k8s-diff-port-502055
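	
	For reference, a listing equivalent to the container-status table above can be pulled straight from the node's CRI-O runtime with crictl; this is a minimal sketch that assumes the profile name used in this run and that crictl is available on the node:
	
	  out/minikube-linux-amd64 -p default-k8s-diff-port-502055 ssh "sudo crictl ps -a"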
	
	
	==> coredns [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54419 - 25158 "HINFO IN 4898418047693939155.180201105836920316. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.044292202s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-502055
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-502055
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=default-k8s-diff-port-502055
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T18_19_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:19:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-502055
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:40:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:37:56 +0000   Mon, 29 Jul 2024 18:18:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:37:56 +0000   Mon, 29 Jul 2024 18:18:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:37:56 +0000   Mon, 29 Jul 2024 18:18:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:37:56 +0000   Mon, 29 Jul 2024 18:27:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.244
	  Hostname:    default-k8s-diff-port-502055
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4343992a877646ba8ffa1e0f4210b5e8
	  System UUID:                4343992a-8776-46ba-8ffa-1e0f4210b5e8
	  Boot ID:                    89fe04ef-dd54-4da9-b9fc-d86630fc2277
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7db6d8ff4d-mk6mx                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-502055                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-502055              250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-502055     200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-cgdm8                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-502055              100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-569cc877fc-bm8tm                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-502055 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-502055 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-502055 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-502055 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-502055 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-502055 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-502055 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-502055 event: Registered Node default-k8s-diff-port-502055 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-502055 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-502055 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-502055 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-502055 event: Registered Node default-k8s-diff-port-502055 in Controller
	
	
	==> dmesg <==
	[Jul29 18:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051858] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042959] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.978612] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.573898] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.576945] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul29 18:27] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.058469] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077397] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.208652] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.157971] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.289097] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[  +4.460508] systemd-fstab-generator[812]: Ignoring "noauto" option for root device
	[  +0.070122] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.435342] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +5.632464] kauditd_printk_skb: 97 callbacks suppressed
	[  +1.938032] systemd-fstab-generator[1549]: Ignoring "noauto" option for root device
	[  +3.740627] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.860335] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a] <==
	{"level":"info","ts":"2024-07-29T18:27:11.136187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"281f28160100a5ee received MsgVoteResp from 281f28160100a5ee at term 3"}
	{"level":"info","ts":"2024-07-29T18:27:11.136214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"281f28160100a5ee became leader at term 3"}
	{"level":"info","ts":"2024-07-29T18:27:11.136243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 281f28160100a5ee elected leader 281f28160100a5ee at term 3"}
	{"level":"info","ts":"2024-07-29T18:27:11.15068Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"281f28160100a5ee","local-member-attributes":"{Name:default-k8s-diff-port-502055 ClientURLs:[https://192.168.61.244:2379]}","request-path":"/0/members/281f28160100a5ee/attributes","cluster-id":"bb4b66289e9d5077","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T18:27:11.150782Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T18:27:11.150938Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T18:27:11.150977Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T18:27:11.151064Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T18:27:11.153172Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T18:27:11.153183Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.244:2379"}
	{"level":"info","ts":"2024-07-29T18:27:27.176245Z","caller":"traceutil/trace.go:171","msg":"trace[1788071170] transaction","detail":"{read_only:false; response_revision:633; number_of_response:1; }","duration":"143.287049ms","start":"2024-07-29T18:27:27.032934Z","end":"2024-07-29T18:27:27.176221Z","steps":["trace[1788071170] 'process raft request'  (duration: 143.178768ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T18:27:27.478265Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.37196ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11956653488811645318 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-502055\" mod_revision:633 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-502055\" value_size:6669 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-502055\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-29T18:27:27.479153Z","caller":"traceutil/trace.go:171","msg":"trace[2115950415] transaction","detail":"{read_only:false; response_revision:634; number_of_response:1; }","duration":"284.905872ms","start":"2024-07-29T18:27:27.194227Z","end":"2024-07-29T18:27:27.479133Z","steps":["trace[2115950415] 'process raft request'  (duration: 126.238092ms)","trace[2115950415] 'compare'  (duration: 157.271443ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T18:27:27.620341Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.920766ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11956653488811645319 > lease_revoke:<id:25ee90ffb78ee945>","response":"size:28"}
	{"level":"info","ts":"2024-07-29T18:27:44.61149Z","caller":"traceutil/trace.go:171","msg":"trace[353620629] transaction","detail":"{read_only:false; response_revision:646; number_of_response:1; }","duration":"391.789488ms","start":"2024-07-29T18:27:44.219685Z","end":"2024-07-29T18:27:44.611474Z","steps":["trace[353620629] 'process raft request'  (duration: 391.611367ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T18:27:44.611624Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T18:27:44.219673Z","time spent":"391.882765ms","remote":"127.0.0.1:36266","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":833,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-569cc877fc-bm8tm.17e6c2638abd5b65\" mod_revision:605 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-bm8tm.17e6c2638abd5b65\" value_size:738 lease:2733281451956869286 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-bm8tm.17e6c2638abd5b65\" > >"}
	{"level":"info","ts":"2024-07-29T18:27:44.611861Z","caller":"traceutil/trace.go:171","msg":"trace[1350280374] linearizableReadLoop","detail":"{readStateIndex:694; appliedIndex:694; }","duration":"391.868314ms","start":"2024-07-29T18:27:44.21998Z","end":"2024-07-29T18:27:44.611849Z","steps":["trace[1350280374] 'read index received'  (duration: 391.86542ms)","trace[1350280374] 'applied index is now lower than readState.Index'  (duration: 2.193µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T18:27:44.612053Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"392.061887ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-502055\" ","response":"range_response_count:1 size:5802"}
	{"level":"info","ts":"2024-07-29T18:27:44.612093Z","caller":"traceutil/trace.go:171","msg":"trace[785010141] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-502055; range_end:; response_count:1; response_revision:646; }","duration":"392.11047ms","start":"2024-07-29T18:27:44.219976Z","end":"2024-07-29T18:27:44.612087Z","steps":["trace[785010141] 'agreement among raft nodes before linearized reading'  (duration: 391.97531ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T18:27:44.612114Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T18:27:44.219951Z","time spent":"392.157883ms","remote":"127.0.0.1:36366","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":5825,"request content":"key:\"/registry/minions/default-k8s-diff-port-502055\" "}
	{"level":"info","ts":"2024-07-29T18:27:44.617191Z","caller":"traceutil/trace.go:171","msg":"trace[1394523806] transaction","detail":"{read_only:false; response_revision:647; number_of_response:1; }","duration":"394.592986ms","start":"2024-07-29T18:27:44.222575Z","end":"2024-07-29T18:27:44.617168Z","steps":["trace[1394523806] 'process raft request'  (duration: 394.510873ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T18:27:44.617356Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T18:27:44.22256Z","time spent":"394.732321ms","remote":"127.0.0.1:36382","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4278,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-569cc877fc-bm8tm\" mod_revision:636 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-bm8tm\" value_size:4212 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-bm8tm\" > >"}
	{"level":"info","ts":"2024-07-29T18:37:11.19113Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":877}
	{"level":"info","ts":"2024-07-29T18:37:11.20747Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":877,"took":"15.701049ms","hash":1876331718,"current-db-size-bytes":2637824,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2637824,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-07-29T18:37:11.207566Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1876331718,"revision":877,"compact-revision":-1}
	
	
	==> kernel <==
	 18:40:42 up 13 min,  0 users,  load average: 0.01, 0.12, 0.13
	Linux default-k8s-diff-port-502055 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4] <==
	I0729 18:35:13.515963       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 18:37:12.518013       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 18:37:12.518321       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0729 18:37:13.519005       1 handler_proxy.go:93] no RequestInfo found in the context
	W0729 18:37:13.519009       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 18:37:13.519154       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	E0729 18:37:13.519262       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 18:37:13.519264       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 18:37:13.521238       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 18:38:13.519973       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 18:38:13.520047       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 18:38:13.520065       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 18:38:13.522195       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 18:38:13.522331       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 18:38:13.522383       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 18:40:13.520980       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 18:40:13.521257       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 18:40:13.521286       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 18:40:13.522492       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 18:40:13.522604       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 18:40:13.522631       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
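	
	The repeated 503 responses above show that the v1beta1.metrics.k8s.io APIService never became available, which matches the metrics-server pod that the kubelet log further down reports as stuck in ImagePullBackOff. A quick cross-check from the test host could look like the sketch below; the k8s-app=metrics-server label is assumed from the standard metrics-server manifests and is not confirmed by this log:
	
	  kubectl --context default-k8s-diff-port-502055 get apiservice v1beta1.metrics.k8s.io
	  kubectl --context default-k8s-diff-port-502055 -n kube-system get pods -l k8s-app=metrics-server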
	
	
	==> kube-controller-manager [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc] <==
	I0729 18:34:55.798216       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:35:25.335156       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:35:25.807574       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:35:55.339653       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:35:55.815491       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:36:25.344651       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:36:25.823305       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:36:55.349573       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:36:55.831639       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:37:25.355649       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:37:25.840509       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:37:55.361776       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:37:55.850008       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 18:38:16.230347       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="362.623µs"
	E0729 18:38:25.367277       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:38:25.857956       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 18:38:27.227845       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="280.362µs"
	E0729 18:38:55.372081       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:38:55.865544       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:39:25.377152       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:39:25.874700       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:39:55.381698       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:39:55.882956       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:40:25.387007       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:40:25.890571       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9] <==
	I0729 18:27:13.953921       1 server_linux.go:69] "Using iptables proxy"
	I0729 18:27:13.978992       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.244"]
	I0729 18:27:14.035464       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 18:27:14.035653       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 18:27:14.035697       1 server_linux.go:165] "Using iptables Proxier"
	I0729 18:27:14.038507       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 18:27:14.038725       1 server.go:872] "Version info" version="v1.30.3"
	I0729 18:27:14.038930       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:27:14.040216       1 config.go:192] "Starting service config controller"
	I0729 18:27:14.040263       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 18:27:14.040300       1 config.go:101] "Starting endpoint slice config controller"
	I0729 18:27:14.040317       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 18:27:14.040785       1 config.go:319] "Starting node config controller"
	I0729 18:27:14.040938       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 18:27:14.140815       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 18:27:14.140990       1 shared_informer.go:320] Caches are synced for node config
	I0729 18:27:14.141031       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd] <==
	I0729 18:27:09.715655       1 serving.go:380] Generated self-signed cert in-memory
	W0729 18:27:12.463128       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 18:27:12.463221       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 18:27:12.463233       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 18:27:12.463242       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 18:27:12.555549       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 18:27:12.555642       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:27:12.557333       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 18:27:12.559027       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 18:27:12.564703       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 18:27:12.559047       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 18:27:12.664961       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 18:38:08 default-k8s-diff-port-502055 kubelet[943]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:38:08 default-k8s-diff-port-502055 kubelet[943]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:38:16 default-k8s-diff-port-502055 kubelet[943]: E0729 18:38:16.213049     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bm8tm" podUID="6891d9ee-82db-4307-adf1-ff60d35506bc"
	Jul 29 18:38:27 default-k8s-diff-port-502055 kubelet[943]: E0729 18:38:27.212314     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bm8tm" podUID="6891d9ee-82db-4307-adf1-ff60d35506bc"
	Jul 29 18:38:38 default-k8s-diff-port-502055 kubelet[943]: E0729 18:38:38.212678     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bm8tm" podUID="6891d9ee-82db-4307-adf1-ff60d35506bc"
	Jul 29 18:38:50 default-k8s-diff-port-502055 kubelet[943]: E0729 18:38:50.212273     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bm8tm" podUID="6891d9ee-82db-4307-adf1-ff60d35506bc"
	Jul 29 18:39:03 default-k8s-diff-port-502055 kubelet[943]: E0729 18:39:03.212827     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bm8tm" podUID="6891d9ee-82db-4307-adf1-ff60d35506bc"
	Jul 29 18:39:08 default-k8s-diff-port-502055 kubelet[943]: E0729 18:39:08.245834     943 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:39:08 default-k8s-diff-port-502055 kubelet[943]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:39:08 default-k8s-diff-port-502055 kubelet[943]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:39:08 default-k8s-diff-port-502055 kubelet[943]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:39:08 default-k8s-diff-port-502055 kubelet[943]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:39:18 default-k8s-diff-port-502055 kubelet[943]: E0729 18:39:18.213555     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bm8tm" podUID="6891d9ee-82db-4307-adf1-ff60d35506bc"
	Jul 29 18:39:29 default-k8s-diff-port-502055 kubelet[943]: E0729 18:39:29.212059     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bm8tm" podUID="6891d9ee-82db-4307-adf1-ff60d35506bc"
	Jul 29 18:39:40 default-k8s-diff-port-502055 kubelet[943]: E0729 18:39:40.211823     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bm8tm" podUID="6891d9ee-82db-4307-adf1-ff60d35506bc"
	Jul 29 18:39:51 default-k8s-diff-port-502055 kubelet[943]: E0729 18:39:51.212119     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bm8tm" podUID="6891d9ee-82db-4307-adf1-ff60d35506bc"
	Jul 29 18:40:03 default-k8s-diff-port-502055 kubelet[943]: E0729 18:40:03.211791     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bm8tm" podUID="6891d9ee-82db-4307-adf1-ff60d35506bc"
	Jul 29 18:40:08 default-k8s-diff-port-502055 kubelet[943]: E0729 18:40:08.244299     943 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:40:08 default-k8s-diff-port-502055 kubelet[943]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:40:08 default-k8s-diff-port-502055 kubelet[943]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:40:08 default-k8s-diff-port-502055 kubelet[943]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:40:08 default-k8s-diff-port-502055 kubelet[943]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:40:17 default-k8s-diff-port-502055 kubelet[943]: E0729 18:40:17.212586     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bm8tm" podUID="6891d9ee-82db-4307-adf1-ff60d35506bc"
	Jul 29 18:40:28 default-k8s-diff-port-502055 kubelet[943]: E0729 18:40:28.214017     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bm8tm" podUID="6891d9ee-82db-4307-adf1-ff60d35506bc"
	Jul 29 18:40:42 default-k8s-diff-port-502055 kubelet[943]: E0729 18:40:42.213384     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bm8tm" podUID="6891d9ee-82db-4307-adf1-ff60d35506bc"
	
	
	==> storage-provisioner [482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b] <==
	I0729 18:27:13.812162       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0729 18:27:43.820179       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481] <==
	I0729 18:27:44.733703       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 18:27:44.748272       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 18:27:44.748396       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 18:28:02.151183       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 18:28:02.151683       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"297350fb-9b3e-4dd8-b768-ae51a278f99d", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-502055_5b47818b-c1c3-4ab1-b16d-cadac8cf42d0 became leader
	I0729 18:28:02.151781       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-502055_5b47818b-c1c3-4ab1-b16d-cadac8cf42d0!
	I0729 18:28:02.252859       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-502055_5b47818b-c1c3-4ab1-b16d-cadac8cf42d0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-502055 -n default-k8s-diff-port-502055
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-502055 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-bm8tm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-502055 describe pod metrics-server-569cc877fc-bm8tm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-502055 describe pod metrics-server-569cc877fc-bm8tm: exit status 1 (63.213667ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-bm8tm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-502055 describe pod metrics-server-569cc877fc-bm8tm: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.03s)
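Note: the post-mortem above first lists pods whose status.phase is not Running via a field selector, then tries to describe them. The describe call is issued without a namespace, so kubectl looks in the default namespace while metrics-server-569cc877fc-bm8tm lives in kube-system, which would explain the NotFound error even though the kubelet log still references that pod. A minimal manual sketch of the same check against this profile (the -n kube-system flag is an addition for illustration, not part of the original helper):

	kubectl --context default-k8s-diff-port-502055 get po -A --field-selector=status.phase!=Running
	kubectl --context default-k8s-diff-port-502055 -n kube-system describe pod metrics-server-569cc877fc-bm8tm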

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0729 18:32:50.517476   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/bridge-729010/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-409322 -n embed-certs-409322
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-29 18:41:10.792397007 +0000 UTC m=+6327.459765302
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
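Note: this check polls for up to 9m0s for a pod labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace, and the expected pod did not start within that window (context deadline exceeded). A rough manual equivalent, assuming the kubectl context created for this profile (embed-certs-409322) and reusing the label, namespace and timeout from the test output:

	kubectl --context embed-certs-409322 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context embed-certs-409322 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s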
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-409322 -n embed-certs-409322
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-409322 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-409322 logs -n 25: (1.980976001s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-729010 sudo cat                              | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo                                  | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo                                  | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo                                  | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo find                             | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo crio                             | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-729010                                       | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	| delete  | -p                                                     | disable-driver-mounts-603863 | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | disable-driver-mounts-603863                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:19 UTC |
	|         | default-k8s-diff-port-502055                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-888056             | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC | 29 Jul 24 18:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-888056                                   | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-409322            | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC | 29 Jul 24 18:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-409322                                  | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-502055  | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC |                     |
	|         | default-k8s-diff-port-502055                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-386663        | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-888056                  | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-888056 --memory=2200                     | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:33 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-409322                 | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-409322                                  | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-502055       | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:31 UTC |
	|         | default-k8s-diff-port-502055                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-386663                              | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:22 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-386663             | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-386663                              | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 18:22:47
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 18:22:47.218965   78080 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:22:47.219209   78080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:22:47.219217   78080 out.go:304] Setting ErrFile to fd 2...
	I0729 18:22:47.219222   78080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:22:47.219370   78080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 18:22:47.219863   78080 out.go:298] Setting JSON to false
	I0729 18:22:47.220726   78080 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7519,"bootTime":1722269848,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:22:47.220777   78080 start.go:139] virtualization: kvm guest
	I0729 18:22:47.222804   78080 out.go:177] * [old-k8s-version-386663] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:22:47.224119   78080 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 18:22:47.224173   78080 notify.go:220] Checking for updates...
	I0729 18:22:47.226449   78080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:22:47.227676   78080 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:22:47.228809   78080 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 18:22:47.229914   78080 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:22:47.230906   78080 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:22:47.232363   78080 config.go:182] Loaded profile config "old-k8s-version-386663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 18:22:47.232750   78080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:22:47.232814   78080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:22:47.247542   78080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44723
	I0729 18:22:47.247909   78080 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:22:47.248418   78080 main.go:141] libmachine: Using API Version  1
	I0729 18:22:47.248436   78080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:22:47.248786   78080 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:22:47.248965   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:22:47.250635   78080 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 18:22:47.251760   78080 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:22:47.252055   78080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:22:47.252098   78080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:22:47.266291   78080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35843
	I0729 18:22:47.266672   78080 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:22:47.267136   78080 main.go:141] libmachine: Using API Version  1
	I0729 18:22:47.267157   78080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:22:47.267492   78080 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:22:47.267662   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:22:47.303335   78080 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 18:22:47.304503   78080 start.go:297] selected driver: kvm2
	I0729 18:22:47.304513   78080 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:22:47.304607   78080 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:22:47.305291   78080 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:22:47.305360   78080 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19345-11206/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:22:47.319918   78080 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:22:47.320315   78080 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:22:47.320341   78080 cni.go:84] Creating CNI manager for ""
	I0729 18:22:47.320349   78080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:22:47.320386   78080 start.go:340] cluster config:
	{Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:22:47.320480   78080 iso.go:125] acquiring lock: {Name:mke302f851ce8256f9b44dd080ed38df68285cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:22:47.322357   78080 out.go:177] * Starting "old-k8s-version-386663" primary control-plane node in "old-k8s-version-386663" cluster
	I0729 18:22:43.378634   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:22:46.450644   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:22:47.323622   78080 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:22:47.323653   78080 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 18:22:47.323660   78080 cache.go:56] Caching tarball of preloaded images
	I0729 18:22:47.323740   78080 preload.go:172] Found /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:22:47.323761   78080 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 18:22:47.323849   78080 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/config.json ...
	I0729 18:22:47.324021   78080 start.go:360] acquireMachinesLock for old-k8s-version-386663: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:22:52.530551   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:22:55.602731   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:01.682636   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:04.754621   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:10.834616   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:13.906688   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:19.986655   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:23.059064   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:29.138659   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:32.210758   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:38.290665   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:41.362732   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:47.442637   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:50.514656   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:56.594611   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:59.666706   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:05.746649   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:08.818685   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:14.898642   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:17.970619   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:24.050664   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:27.122664   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:33.202629   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:36.274678   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:42.354674   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:45.426704   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:51.506670   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:54.578602   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:00.658683   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:03.730663   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:09.810619   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:12.882598   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:18.962612   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:22.034673   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:28.114638   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:31.186598   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:37.266642   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:40.338599   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:46.418679   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:49.490705   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:55.570690   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:58.642719   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:04.722643   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:07.794711   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:13.874638   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:16.946806   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:19.951345   77627 start.go:364] duration metric: took 4m10.060086709s to acquireMachinesLock for "embed-certs-409322"
	I0729 18:26:19.951406   77627 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:26:19.951414   77627 fix.go:54] fixHost starting: 
	I0729 18:26:19.951732   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:26:19.951761   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:26:19.967602   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41827
	I0729 18:26:19.968062   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:26:19.968486   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:26:19.968505   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:26:19.968809   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:26:19.969009   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:19.969135   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:26:19.970757   77627 fix.go:112] recreateIfNeeded on embed-certs-409322: state=Stopped err=<nil>
	I0729 18:26:19.970784   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	W0729 18:26:19.970931   77627 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:26:19.972631   77627 out.go:177] * Restarting existing kvm2 VM for "embed-certs-409322" ...
	I0729 18:26:19.948656   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:26:19.948718   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:26:19.949066   77394 buildroot.go:166] provisioning hostname "no-preload-888056"
	I0729 18:26:19.949096   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:26:19.949286   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:26:19.951194   77394 machine.go:97] duration metric: took 4m37.435248922s to provisionDockerMachine
	I0729 18:26:19.951238   77394 fix.go:56] duration metric: took 4m37.45552986s for fixHost
	I0729 18:26:19.951246   77394 start.go:83] releasing machines lock for "no-preload-888056", held for 4m37.455571504s
	W0729 18:26:19.951284   77394 start.go:714] error starting host: provision: host is not running
	W0729 18:26:19.951381   77394 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 18:26:19.951389   77394 start.go:729] Will try again in 5 seconds ...
	I0729 18:26:19.973786   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Start
	I0729 18:26:19.973923   77627 main.go:141] libmachine: (embed-certs-409322) Ensuring networks are active...
	I0729 18:26:19.974594   77627 main.go:141] libmachine: (embed-certs-409322) Ensuring network default is active
	I0729 18:26:19.974930   77627 main.go:141] libmachine: (embed-certs-409322) Ensuring network mk-embed-certs-409322 is active
	I0729 18:26:19.975500   77627 main.go:141] libmachine: (embed-certs-409322) Getting domain xml...
	I0729 18:26:19.976135   77627 main.go:141] libmachine: (embed-certs-409322) Creating domain...
	I0729 18:26:21.186491   77627 main.go:141] libmachine: (embed-certs-409322) Waiting to get IP...
	I0729 18:26:21.187403   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:21.187857   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:21.187924   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:21.187843   78811 retry.go:31] will retry after 218.694883ms: waiting for machine to come up
	I0729 18:26:21.408404   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:21.408843   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:21.408872   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:21.408795   78811 retry.go:31] will retry after 335.138992ms: waiting for machine to come up
	I0729 18:26:21.745329   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:21.745805   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:21.745828   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:21.745759   78811 retry.go:31] will retry after 317.831297ms: waiting for machine to come up
	I0729 18:26:22.065446   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:22.065985   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:22.066024   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:22.065948   78811 retry.go:31] will retry after 557.945634ms: waiting for machine to come up
	I0729 18:26:22.625624   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:22.626020   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:22.626047   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:22.625967   78811 retry.go:31] will retry after 739.991425ms: waiting for machine to come up
	I0729 18:26:23.368166   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:23.368523   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:23.368549   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:23.368477   78811 retry.go:31] will retry after 878.16479ms: waiting for machine to come up
	I0729 18:26:24.248467   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:24.248871   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:24.248895   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:24.248813   78811 retry.go:31] will retry after 1.022542608s: waiting for machine to come up
	I0729 18:26:24.952911   77394 start.go:360] acquireMachinesLock for no-preload-888056: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:26:25.273470   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:25.273886   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:25.273913   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:25.273829   78811 retry.go:31] will retry after 1.313344307s: waiting for machine to come up
	I0729 18:26:26.589378   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:26.589805   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:26.589852   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:26.589769   78811 retry.go:31] will retry after 1.553795128s: waiting for machine to come up
	I0729 18:26:28.145271   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:28.145680   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:28.145704   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:28.145643   78811 retry.go:31] will retry after 1.859680601s: waiting for machine to come up
	I0729 18:26:30.007588   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:30.007988   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:30.008018   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:30.007937   78811 retry.go:31] will retry after 1.754805493s: waiting for machine to come up
	I0729 18:26:31.764527   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:31.765077   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:31.765107   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:31.765030   78811 retry.go:31] will retry after 2.769383357s: waiting for machine to come up
	I0729 18:26:34.536479   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:34.536972   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:34.537007   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:34.536921   78811 retry.go:31] will retry after 3.355218512s: waiting for machine to come up
	I0729 18:26:39.563371   77859 start.go:364] duration metric: took 3m59.712120998s to acquireMachinesLock for "default-k8s-diff-port-502055"
	I0729 18:26:39.563440   77859 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:26:39.563452   77859 fix.go:54] fixHost starting: 
	I0729 18:26:39.563871   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:26:39.563914   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:26:39.580545   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I0729 18:26:39.580962   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:26:39.581492   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:26:39.581518   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:26:39.581864   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:26:39.582096   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:39.582290   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:26:39.583857   77859 fix.go:112] recreateIfNeeded on default-k8s-diff-port-502055: state=Stopped err=<nil>
	I0729 18:26:39.583883   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	W0729 18:26:39.584062   77859 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:26:39.586281   77859 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-502055" ...
	I0729 18:26:39.587651   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Start
	I0729 18:26:39.587814   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Ensuring networks are active...
	I0729 18:26:39.588499   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Ensuring network default is active
	I0729 18:26:39.588864   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Ensuring network mk-default-k8s-diff-port-502055 is active
	I0729 18:26:39.589616   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Getting domain xml...
	I0729 18:26:39.590433   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Creating domain...
	I0729 18:26:37.896070   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:37.896640   77627 main.go:141] libmachine: (embed-certs-409322) Found IP for machine: 192.168.39.58
	I0729 18:26:37.896664   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has current primary IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:37.896670   77627 main.go:141] libmachine: (embed-certs-409322) Reserving static IP address...
	I0729 18:26:37.897129   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "embed-certs-409322", mac: "52:54:00:22:9f:57", ip: "192.168.39.58"} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:37.897157   77627 main.go:141] libmachine: (embed-certs-409322) Reserved static IP address: 192.168.39.58
	I0729 18:26:37.897173   77627 main.go:141] libmachine: (embed-certs-409322) DBG | skip adding static IP to network mk-embed-certs-409322 - found existing host DHCP lease matching {name: "embed-certs-409322", mac: "52:54:00:22:9f:57", ip: "192.168.39.58"}
	I0729 18:26:37.897189   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Getting to WaitForSSH function...
	I0729 18:26:37.897206   77627 main.go:141] libmachine: (embed-certs-409322) Waiting for SSH to be available...
	I0729 18:26:37.899216   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:37.899595   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:37.899616   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:37.899785   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Using SSH client type: external
	I0729 18:26:37.899808   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa (-rw-------)
	I0729 18:26:37.899845   77627 main.go:141] libmachine: (embed-certs-409322) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:26:37.899858   77627 main.go:141] libmachine: (embed-certs-409322) DBG | About to run SSH command:
	I0729 18:26:37.899872   77627 main.go:141] libmachine: (embed-certs-409322) DBG | exit 0
	I0729 18:26:38.026619   77627 main.go:141] libmachine: (embed-certs-409322) DBG | SSH cmd err, output: <nil>: 
	I0729 18:26:38.027028   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetConfigRaw
	I0729 18:26:38.027621   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetIP
	I0729 18:26:38.030532   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.030963   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.030989   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.031243   77627 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/config.json ...
	I0729 18:26:38.031413   77627 machine.go:94] provisionDockerMachine start ...
	I0729 18:26:38.031437   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:38.031642   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.033867   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.034218   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.034251   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.034380   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:38.034545   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.034682   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.034807   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:38.034992   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:38.035175   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:38.035185   77627 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:26:38.142565   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:26:38.142595   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetMachineName
	I0729 18:26:38.142842   77627 buildroot.go:166] provisioning hostname "embed-certs-409322"
	I0729 18:26:38.142872   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetMachineName
	I0729 18:26:38.143071   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.145625   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.145951   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.145974   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.146217   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:38.146423   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.146577   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.146730   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:38.146861   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:38.147046   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:38.147065   77627 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-409322 && echo "embed-certs-409322" | sudo tee /etc/hostname
	I0729 18:26:38.264341   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-409322
	
	I0729 18:26:38.264368   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.266846   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.267144   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.267171   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.267328   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:38.267488   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.267660   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.267757   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:38.267936   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:38.268106   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:38.268122   77627 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-409322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-409322/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-409322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:26:38.383748   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:26:38.383779   77627 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:26:38.383805   77627 buildroot.go:174] setting up certificates
	I0729 18:26:38.383817   77627 provision.go:84] configureAuth start
	I0729 18:26:38.383827   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetMachineName
	I0729 18:26:38.384110   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetIP
	I0729 18:26:38.386936   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.387320   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.387348   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.387508   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.389550   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.389871   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.389910   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.389978   77627 provision.go:143] copyHostCerts
	I0729 18:26:38.390039   77627 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:26:38.390052   77627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:26:38.390137   77627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:26:38.390257   77627 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:26:38.390268   77627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:26:38.390308   77627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:26:38.390406   77627 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:26:38.390416   77627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:26:38.390456   77627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:26:38.390526   77627 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.embed-certs-409322 san=[127.0.0.1 192.168.39.58 embed-certs-409322 localhost minikube]
	I0729 18:26:38.903674   77627 provision.go:177] copyRemoteCerts
	I0729 18:26:38.903758   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:26:38.903791   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.906662   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.906984   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.907018   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.907171   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:38.907360   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.907543   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:38.907667   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:26:38.992373   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 18:26:39.016465   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:26:39.039598   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:26:39.062415   77627 provision.go:87] duration metric: took 678.589364ms to configureAuth
	I0729 18:26:39.062443   77627 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:26:39.062622   77627 config.go:182] Loaded profile config "embed-certs-409322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:26:39.062696   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.065308   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.065703   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.065728   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.065902   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.066076   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.066244   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.066403   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.066553   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:39.066743   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:39.066759   77627 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:26:39.326153   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:26:39.326176   77627 machine.go:97] duration metric: took 1.29475208s to provisionDockerMachine
	I0729 18:26:39.326186   77627 start.go:293] postStartSetup for "embed-certs-409322" (driver="kvm2")
	I0729 18:26:39.326195   77627 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:26:39.326209   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.326603   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:26:39.326637   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.329049   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.329448   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.329476   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.329616   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.329822   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.330022   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.330186   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:26:39.413084   77627 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:26:39.417438   77627 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:26:39.417462   77627 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:26:39.417535   77627 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:26:39.417626   77627 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:26:39.417749   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:26:39.427256   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:26:39.451330   77627 start.go:296] duration metric: took 125.132889ms for postStartSetup
	I0729 18:26:39.451362   77627 fix.go:56] duration metric: took 19.499949606s for fixHost
	I0729 18:26:39.451380   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.453750   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.454047   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.454072   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.454237   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.454416   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.454570   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.454698   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.454864   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:39.455069   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:39.455080   77627 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:26:39.563211   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277599.531173461
	
	I0729 18:26:39.563238   77627 fix.go:216] guest clock: 1722277599.531173461
	I0729 18:26:39.563248   77627 fix.go:229] Guest: 2024-07-29 18:26:39.531173461 +0000 UTC Remote: 2024-07-29 18:26:39.451365859 +0000 UTC m=+269.697720486 (delta=79.807602ms)
	I0729 18:26:39.563278   77627 fix.go:200] guest clock delta is within tolerance: 79.807602ms
	I0729 18:26:39.563287   77627 start.go:83] releasing machines lock for "embed-certs-409322", held for 19.611902888s
	I0729 18:26:39.563318   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.563562   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetIP
	I0729 18:26:39.566225   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.566549   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.566575   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.566766   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.567227   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.567378   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.567460   77627 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:26:39.567501   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.567565   77627 ssh_runner.go:195] Run: cat /version.json
	I0729 18:26:39.567593   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.570113   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.570330   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.570536   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.570558   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.570747   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.570754   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.570776   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.570883   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.571004   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.571113   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.571211   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.571330   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.571438   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:26:39.571478   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:26:39.651235   77627 ssh_runner.go:195] Run: systemctl --version
	I0729 18:26:39.677383   77627 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:26:39.824036   77627 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:26:39.830027   77627 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:26:39.830103   77627 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:26:39.845939   77627 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:26:39.845963   77627 start.go:495] detecting cgroup driver to use...
	I0729 18:26:39.846019   77627 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:26:39.862867   77627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:26:39.878060   77627 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:26:39.878152   77627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:26:39.892471   77627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:26:39.906690   77627 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:26:40.039725   77627 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:26:40.201419   77627 docker.go:233] disabling docker service ...
	I0729 18:26:40.201489   77627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:26:40.222454   77627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:26:40.237523   77627 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:26:40.371463   77627 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:26:40.499676   77627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:26:40.514068   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:26:40.534051   77627 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:26:40.534114   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.545364   77627 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:26:40.545458   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.557113   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.568215   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.579433   77627 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:26:40.591005   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.601933   77627 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.621097   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.631960   77627 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:26:40.642308   77627 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:26:40.642383   77627 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:26:40.656469   77627 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:26:40.671251   77627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:26:40.784289   77627 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:26:40.933837   77627 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:26:40.933910   77627 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:26:40.939031   77627 start.go:563] Will wait 60s for crictl version
	I0729 18:26:40.939086   77627 ssh_runner.go:195] Run: which crictl
	I0729 18:26:40.943166   77627 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:26:40.985673   77627 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:26:40.985753   77627 ssh_runner.go:195] Run: crio --version
	I0729 18:26:41.013973   77627 ssh_runner.go:195] Run: crio --version
	I0729 18:26:41.046080   77627 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:26:40.822462   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting to get IP...
	I0729 18:26:40.823526   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:40.823948   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:40.824000   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:40.823920   78947 retry.go:31] will retry after 262.026124ms: waiting for machine to come up
	I0729 18:26:41.087492   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.087961   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.087991   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:41.087913   78947 retry.go:31] will retry after 380.066984ms: waiting for machine to come up
	I0729 18:26:41.469728   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.470215   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.470244   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:41.470181   78947 retry.go:31] will retry after 293.069239ms: waiting for machine to come up
	I0729 18:26:41.764797   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.765277   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.765303   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:41.765228   78947 retry.go:31] will retry after 491.247116ms: waiting for machine to come up
	I0729 18:26:42.257741   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:42.258247   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:42.258275   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:42.258220   78947 retry.go:31] will retry after 693.832082ms: waiting for machine to come up
	I0729 18:26:42.953375   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:42.954146   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:42.954169   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:42.954051   78947 retry.go:31] will retry after 710.005115ms: waiting for machine to come up
	I0729 18:26:43.666068   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:43.666478   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:43.666504   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:43.666438   78947 retry.go:31] will retry after 1.077324053s: waiting for machine to come up
	I0729 18:26:41.047322   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetIP
	I0729 18:26:41.049993   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:41.050394   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:41.050433   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:41.050630   77627 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 18:26:41.054805   77627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:26:41.066926   77627 kubeadm.go:883] updating cluster {Name:embed-certs-409322 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-409322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:26:41.067053   77627 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:26:41.067115   77627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:26:41.103417   77627 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 18:26:41.103489   77627 ssh_runner.go:195] Run: which lz4
	I0729 18:26:41.107793   77627 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 18:26:41.112161   77627 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:26:41.112192   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 18:26:42.559564   77627 crio.go:462] duration metric: took 1.451801292s to copy over tarball
	I0729 18:26:42.559679   77627 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:26:44.759513   77627 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.199801336s)
	I0729 18:26:44.759543   77627 crio.go:469] duration metric: took 2.199942615s to extract the tarball
	I0729 18:26:44.759554   77627 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 18:26:44.744984   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:44.745450   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:44.745477   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:44.745403   78947 retry.go:31] will retry after 1.064257005s: waiting for machine to come up
	I0729 18:26:45.811414   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:45.811840   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:45.811880   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:45.811799   78947 retry.go:31] will retry after 1.30236943s: waiting for machine to come up
	I0729 18:26:47.116252   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:47.116668   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:47.116728   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:47.116647   78947 retry.go:31] will retry after 1.424333691s: waiting for machine to come up
	I0729 18:26:48.543481   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:48.543945   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:48.543973   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:48.543894   78947 retry.go:31] will retry after 2.106061522s: waiting for machine to come up
	I0729 18:26:44.798609   77627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:26:44.848236   77627 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:26:44.848257   77627 cache_images.go:84] Images are preloaded, skipping loading
	I0729 18:26:44.848265   77627 kubeadm.go:934] updating node { 192.168.39.58 8443 v1.30.3 crio true true} ...
	I0729 18:26:44.848355   77627 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-409322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-409322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:26:44.848415   77627 ssh_runner.go:195] Run: crio config
	I0729 18:26:44.901558   77627 cni.go:84] Creating CNI manager for ""
	I0729 18:26:44.901584   77627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:26:44.901597   77627 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:26:44.901625   77627 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-409322 NodeName:embed-certs-409322 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:26:44.901807   77627 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-409322"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:26:44.901875   77627 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:26:44.912290   77627 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:26:44.912351   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:26:44.921801   77627 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0729 18:26:44.940473   77627 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:26:44.958445   77627 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0729 18:26:44.976890   77627 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I0729 18:26:44.980974   77627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:26:44.994793   77627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:26:45.120453   77627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:26:45.138398   77627 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322 for IP: 192.168.39.58
	I0729 18:26:45.138419   77627 certs.go:194] generating shared ca certs ...
	I0729 18:26:45.138438   77627 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:26:45.138592   77627 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:26:45.138643   77627 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:26:45.138657   77627 certs.go:256] generating profile certs ...
	I0729 18:26:45.138751   77627 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/client.key
	I0729 18:26:45.138823   77627 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/apiserver.key.4af4a6b9
	I0729 18:26:45.138889   77627 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/proxy-client.key
	I0729 18:26:45.139034   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:26:45.139074   77627 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:26:45.139088   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:26:45.139122   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:26:45.139161   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:26:45.139200   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:26:45.139305   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:26:45.139979   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:26:45.177194   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:26:45.206349   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:26:45.242291   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:26:45.277062   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 18:26:45.312447   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 18:26:45.345482   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:26:45.369151   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:26:45.394521   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:26:45.418579   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:26:45.443252   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:26:45.466770   77627 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:26:45.484159   77627 ssh_runner.go:195] Run: openssl version
	I0729 18:26:45.490045   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:26:45.501166   77627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:26:45.505930   77627 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:26:45.505988   77627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:26:45.511926   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:26:45.522860   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:26:45.533560   77627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:26:45.538411   77627 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:26:45.538474   77627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:26:45.544485   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:26:45.555603   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:26:45.566407   77627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:26:45.570892   77627 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:26:45.570944   77627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:26:45.576555   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 18:26:45.587780   77627 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:26:45.592689   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:26:45.598981   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:26:45.604952   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:26:45.611225   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:26:45.617506   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:26:45.623744   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 18:26:45.629836   77627 kubeadm.go:392] StartCluster: {Name:embed-certs-409322 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-409322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:26:45.629947   77627 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:26:45.630003   77627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:26:45.667768   77627 cri.go:89] found id: ""
	I0729 18:26:45.667853   77627 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:26:45.678703   77627 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:26:45.678724   77627 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:26:45.678772   77627 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:26:45.691979   77627 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:26:45.693237   77627 kubeconfig.go:125] found "embed-certs-409322" server: "https://192.168.39.58:8443"
	I0729 18:26:45.696093   77627 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:26:45.708981   77627 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.58
	I0729 18:26:45.709017   77627 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:26:45.709030   77627 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:26:45.709088   77627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:26:45.748738   77627 cri.go:89] found id: ""
	I0729 18:26:45.748817   77627 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:26:45.775148   77627 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:26:45.786631   77627 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:26:45.786651   77627 kubeadm.go:157] found existing configuration files:
	
	I0729 18:26:45.786701   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:26:45.799453   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:26:45.799507   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:26:45.809691   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:26:45.819592   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:26:45.819638   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:26:45.832072   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:26:45.843769   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:26:45.843817   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:26:45.854649   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:26:45.863448   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:26:45.863504   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:26:45.872399   77627 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:26:45.881992   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:46.012679   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:47.143076   77627 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.130359187s)
	I0729 18:26:47.143112   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:47.370854   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:47.446808   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:47.550087   77627 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:26:47.550191   77627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:26:48.050502   77627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:26:48.550499   77627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:26:48.608713   77627 api_server.go:72] duration metric: took 1.058625786s to wait for apiserver process to appear ...
	I0729 18:26:48.608745   77627 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:26:48.608773   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:51.829925   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:26:51.829963   77627 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:26:51.829979   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:51.843474   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:26:51.843503   77627 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:26:52.109882   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:52.117387   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:26:52.117415   77627 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:26:52.608863   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:52.613809   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:26:52.613840   77627 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:26:53.109430   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:53.115353   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I0729 18:26:53.122373   77627 api_server.go:141] control plane version: v1.30.3
	I0729 18:26:53.122411   77627 api_server.go:131] duration metric: took 4.513658045s to wait for apiserver health ...
	I0729 18:26:53.122420   77627 cni.go:84] Creating CNI manager for ""
	I0729 18:26:53.122426   77627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:26:53.123807   77627 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:26:50.651329   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:50.651724   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:50.651753   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:50.651678   78947 retry.go:31] will retry after 3.358167933s: waiting for machine to come up
	I0729 18:26:54.014102   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:54.014543   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:54.014576   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:54.014495   78947 retry.go:31] will retry after 4.372189125s: waiting for machine to come up
	I0729 18:26:53.124953   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:26:53.140970   77627 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 18:26:53.179660   77627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:26:53.193885   77627 system_pods.go:59] 8 kube-system pods found
	I0729 18:26:53.193921   77627 system_pods.go:61] "coredns-7db6d8ff4d-vxvfc" [da2fd5a1-f57f-4374-99ee-9017e228176f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 18:26:53.193932   77627 system_pods.go:61] "etcd-embed-certs-409322" [3eca462f-6156-4858-a886-30d0d32faa35] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 18:26:53.193944   77627 system_pods.go:61] "kube-apiserver-embed-certs-409322" [4c6473c7-d7b8-4513-b800-7cab08748d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 18:26:53.193953   77627 system_pods.go:61] "kube-controller-manager-embed-certs-409322" [2dc47da0-3d24-49d8-91ae-13074468b423] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 18:26:53.193961   77627 system_pods.go:61] "kube-proxy-zf5jf" [a0b6fd82-d0b1-4821-a668-4cb6420b4860] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 18:26:53.193969   77627 system_pods.go:61] "kube-scheduler-embed-certs-409322" [ab422567-58e6-4f22-a7cf-391b35cc386c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 18:26:53.193977   77627 system_pods.go:61] "metrics-server-569cc877fc-flh27" [83d6c69c-200d-4ce2-80e9-b83ff5b6ebe9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:26:53.193989   77627 system_pods.go:61] "storage-provisioner" [73ff548f-26c3-4442-a9bd-bdac45261476] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 18:26:53.194002   77627 system_pods.go:74] duration metric: took 14.320361ms to wait for pod list to return data ...
	I0729 18:26:53.194014   77627 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:26:53.197826   77627 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:26:53.197858   77627 node_conditions.go:123] node cpu capacity is 2
	I0729 18:26:53.197870   77627 node_conditions.go:105] duration metric: took 3.850077ms to run NodePressure ...
	I0729 18:26:53.197884   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:53.467868   77627 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 18:26:53.471886   77627 kubeadm.go:739] kubelet initialised
	I0729 18:26:53.471905   77627 kubeadm.go:740] duration metric: took 4.016417ms waiting for restarted kubelet to initialise ...
	I0729 18:26:53.471912   77627 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:26:53.476695   77627 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-vxvfc" in "kube-system" namespace to be "Ready" ...
	I0729 18:26:53.480449   77627 pod_ready.go:97] node "embed-certs-409322" hosting pod "coredns-7db6d8ff4d-vxvfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.480481   77627 pod_ready.go:81] duration metric: took 3.766ms for pod "coredns-7db6d8ff4d-vxvfc" in "kube-system" namespace to be "Ready" ...
	E0729 18:26:53.480491   77627 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-409322" hosting pod "coredns-7db6d8ff4d-vxvfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.480501   77627 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:26:53.484712   77627 pod_ready.go:97] node "embed-certs-409322" hosting pod "etcd-embed-certs-409322" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.484739   77627 pod_ready.go:81] duration metric: took 4.228077ms for pod "etcd-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	E0729 18:26:53.484750   77627 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-409322" hosting pod "etcd-embed-certs-409322" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.484759   77627 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:26:53.488510   77627 pod_ready.go:97] node "embed-certs-409322" hosting pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.488532   77627 pod_ready.go:81] duration metric: took 3.76371ms for pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	E0729 18:26:53.488539   77627 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-409322" hosting pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.488545   77627 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:26:58.387940   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.388358   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Found IP for machine: 192.168.61.244
	I0729 18:26:58.388383   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has current primary IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.388396   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Reserving static IP address...
	I0729 18:26:58.388794   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-502055", mac: "52:54:00:ae:63:e1", ip: "192.168.61.244"} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.388826   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Reserved static IP address: 192.168.61.244
	I0729 18:26:58.388848   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | skip adding static IP to network mk-default-k8s-diff-port-502055 - found existing host DHCP lease matching {name: "default-k8s-diff-port-502055", mac: "52:54:00:ae:63:e1", ip: "192.168.61.244"}
	I0729 18:26:58.388873   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for SSH to be available...
	I0729 18:26:58.388894   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Getting to WaitForSSH function...
	I0729 18:26:58.390937   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.391281   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.391319   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.391381   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Using SSH client type: external
	I0729 18:26:58.391408   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa (-rw-------)
	I0729 18:26:58.391457   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.244 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:26:58.391490   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | About to run SSH command:
	I0729 18:26:58.391511   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | exit 0
	I0729 18:26:58.518399   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | SSH cmd err, output: <nil>: 
	I0729 18:26:58.518782   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetConfigRaw
	I0729 18:26:58.519492   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetIP
	I0729 18:26:58.522245   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.522580   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.522615   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.522862   77859 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/config.json ...
	I0729 18:26:58.523037   77859 machine.go:94] provisionDockerMachine start ...
	I0729 18:26:58.523053   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:58.523258   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:58.525654   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.525998   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.526018   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.526185   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:58.526351   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.526555   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.526705   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:58.526874   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:58.527066   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:58.527079   77859 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:26:58.635267   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:26:58.635302   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetMachineName
	I0729 18:26:58.635524   77859 buildroot.go:166] provisioning hostname "default-k8s-diff-port-502055"
	I0729 18:26:58.635550   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetMachineName
	I0729 18:26:58.635789   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:58.638770   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.639235   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.639265   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.639371   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:58.639564   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.639729   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.639865   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:58.640048   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:58.640227   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:58.640245   77859 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-502055 && echo "default-k8s-diff-port-502055" | sudo tee /etc/hostname
	I0729 18:26:58.760577   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-502055
	
	I0729 18:26:58.760603   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:58.763294   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.763591   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.763625   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.763766   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:58.763970   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.764159   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.764311   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:58.764480   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:58.764641   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:58.764659   77859 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-502055' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-502055/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-502055' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:26:58.879366   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:26:58.879400   77859 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:26:58.879440   77859 buildroot.go:174] setting up certificates
	I0729 18:26:58.879451   77859 provision.go:84] configureAuth start
	I0729 18:26:58.879463   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetMachineName
	I0729 18:26:58.879735   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetIP
	I0729 18:26:58.882335   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.882652   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.882680   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.882848   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:58.885023   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.885313   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.885339   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.885433   77859 provision.go:143] copyHostCerts
	I0729 18:26:58.885479   77859 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:26:58.885488   77859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:26:58.885544   77859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:26:58.885633   77859 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:26:58.885641   77859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:26:58.885660   77859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:26:58.885709   77859 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:26:58.885716   77859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:26:58.885733   77859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:26:58.885783   77859 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-502055 san=[127.0.0.1 192.168.61.244 default-k8s-diff-port-502055 localhost minikube]
	I0729 18:26:59.130657   77859 provision.go:177] copyRemoteCerts
	I0729 18:26:59.130724   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:26:59.130749   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.133536   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.133898   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.133922   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.134079   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.134260   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.134421   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.134530   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:26:59.216614   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 18:26:59.240540   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:26:59.267350   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:26:59.294003   77859 provision.go:87] duration metric: took 414.539559ms to configureAuth
	I0729 18:26:59.294032   77859 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:26:59.294222   77859 config.go:182] Loaded profile config "default-k8s-diff-port-502055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:26:59.294293   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.296911   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.297285   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.297311   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.297450   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.297656   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.297804   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.297935   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.298102   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:59.298265   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:59.298281   77859 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:26:59.557084   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:26:59.557131   77859 machine.go:97] duration metric: took 1.034080964s to provisionDockerMachine
	I0729 18:26:59.557148   77859 start.go:293] postStartSetup for "default-k8s-diff-port-502055" (driver="kvm2")
	I0729 18:26:59.557165   77859 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:26:59.557191   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.557496   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:26:59.557529   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.559962   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.560255   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.560276   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.560461   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.560635   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.560798   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.560953   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:26:59.645623   77859 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:26:59.650416   77859 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:26:59.650447   77859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:26:59.650531   77859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:26:59.650624   77859 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:26:59.650730   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:26:59.660864   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:26:59.685728   77859 start.go:296] duration metric: took 128.564534ms for postStartSetup
	I0729 18:26:59.685767   77859 fix.go:56] duration metric: took 20.122314731s for fixHost
	I0729 18:26:59.685791   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.688401   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.688773   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.688801   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.688978   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.689157   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.689293   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.689401   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.689551   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:59.689712   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:59.689722   77859 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 18:26:55.494570   77627 pod_ready.go:102] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"False"
	I0729 18:26:57.495784   77627 pod_ready.go:102] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"False"
	I0729 18:26:59.799712   78080 start.go:364] duration metric: took 4m12.475660562s to acquireMachinesLock for "old-k8s-version-386663"
	I0729 18:26:59.799786   78080 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:26:59.799796   78080 fix.go:54] fixHost starting: 
	I0729 18:26:59.800184   78080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:26:59.800215   78080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:26:59.816885   78080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37963
	I0729 18:26:59.817336   78080 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:26:59.817822   78080 main.go:141] libmachine: Using API Version  1
	I0729 18:26:59.817851   78080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:26:59.818283   78080 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:26:59.818505   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:26:59.818671   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetState
	I0729 18:26:59.820232   78080 fix.go:112] recreateIfNeeded on old-k8s-version-386663: state=Stopped err=<nil>
	I0729 18:26:59.820254   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	W0729 18:26:59.820426   78080 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:26:59.822140   78080 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-386663" ...
	I0729 18:26:59.799573   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277619.755982716
	
	I0729 18:26:59.799603   77859 fix.go:216] guest clock: 1722277619.755982716
	I0729 18:26:59.799614   77859 fix.go:229] Guest: 2024-07-29 18:26:59.755982716 +0000 UTC Remote: 2024-07-29 18:26:59.685771603 +0000 UTC m=+259.980298680 (delta=70.211113ms)
	I0729 18:26:59.799637   77859 fix.go:200] guest clock delta is within tolerance: 70.211113ms
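fix.go above reads the guest clock with `date +%s.%N`, compares it to the host-side timestamp, and accepts the host when the delta (about 70ms here) is within tolerance. A minimal sketch of that comparison; the 2-second tolerance is an assumption, not a value taken from this log:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock (seconds from `date +%s.%N`)
// and the host-side timestamp agree to within tol.
func withinTolerance(guestUnix float64, host time.Time, tol time.Duration) (time.Duration, bool) {
	sec := int64(guestUnix)
	nsec := int64((guestUnix - float64(sec)) * 1e9) // float64 loses some ns precision; fine for a sketch
	guest := time.Unix(sec, nsec)
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	// Values copied from the fix.go lines above.
	guestClock := 1722277619.755982716
	host := time.Date(2024, 7, 29, 18, 26, 59, 685771603, time.UTC)
	delta, ok := withinTolerance(guestClock, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}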
	I0729 18:26:59.799641   77859 start.go:83] releasing machines lock for "default-k8s-diff-port-502055", held for 20.236230068s
	I0729 18:26:59.799672   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.799944   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetIP
	I0729 18:26:59.802636   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.802983   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.803013   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.803248   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.803740   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.803927   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.804023   77859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:26:59.804070   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.804193   77859 ssh_runner.go:195] Run: cat /version.json
	I0729 18:26:59.804229   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.807037   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.807117   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.807395   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.807435   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.807528   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.807547   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.807565   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.807708   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.807717   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.807910   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.807936   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.808043   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:26:59.808098   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.808244   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:26:59.920371   77859 ssh_runner.go:195] Run: systemctl --version
	I0729 18:26:59.926620   77859 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:27:00.072161   77859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:27:00.079273   77859 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:27:00.079340   77859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:27:00.096528   77859 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:27:00.096550   77859 start.go:495] detecting cgroup driver to use...
	I0729 18:27:00.096610   77859 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:27:00.113690   77859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:27:00.129058   77859 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:27:00.129126   77859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:27:00.143930   77859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:27:00.158085   77859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:27:00.296398   77859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:27:00.482313   77859 docker.go:233] disabling docker service ...
	I0729 18:27:00.482459   77859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:27:00.501504   77859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:27:00.520932   77859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:27:00.657805   77859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:27:00.792064   77859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:27:00.807790   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:27:00.827373   77859 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:27:00.827423   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.838281   77859 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:27:00.838340   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.849533   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.860820   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.872359   77859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:27:00.883904   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.895589   77859 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.914639   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
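The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9, force cgroup_manager to cgroupfs, and replace conmon_cgroup with "pod". A rough in-memory Go equivalent of those rewrites, for illustration only (minikube itself runs the sed commands shown over SSH):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies, in memory, the same edits the sed commands above
// make to /etc/crio/crio.conf.d/02-crio.conf.
func rewriteCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	conf = regexp.MustCompile("(?m)^conmon_cgroup = .*\n").ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	// Demo input only; real contents come from the VM's 02-crio.conf.
	in := "pause_image = \"registry.k8s.io/pause:3.8\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(rewriteCrioConf(in))
}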
	I0729 18:27:00.926278   77859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:27:00.936329   77859 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:27:00.936383   77859 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:27:00.951219   77859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
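When `sysctl net.bridge.bridge-nf-call-iptables` fails because /proc/sys/net/bridge does not exist yet, the log above shows the fallback: load br_netfilter and then enable IPv4 forwarding. A compact Go sketch of that check-then-fallback sequence (paths as in the log; this is not minikube's actual code path):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the log above: if the bridge sysctl path is
// missing, load br_netfilter, then turn on IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}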
	I0729 18:27:00.966530   77859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:01.086665   77859 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:27:01.233627   77859 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:27:01.233703   77859 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:27:01.241055   77859 start.go:563] Will wait 60s for crictl version
	I0729 18:27:01.241122   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:27:01.244875   77859 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:27:01.284013   77859 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:27:01.284103   77859 ssh_runner.go:195] Run: crio --version
	I0729 18:27:01.315493   77859 ssh_runner.go:195] Run: crio --version
	I0729 18:27:01.348781   77859 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:26:59.823421   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .Start
	I0729 18:26:59.823575   78080 main.go:141] libmachine: (old-k8s-version-386663) Ensuring networks are active...
	I0729 18:26:59.824264   78080 main.go:141] libmachine: (old-k8s-version-386663) Ensuring network default is active
	I0729 18:26:59.824641   78080 main.go:141] libmachine: (old-k8s-version-386663) Ensuring network mk-old-k8s-version-386663 is active
	I0729 18:26:59.825024   78080 main.go:141] libmachine: (old-k8s-version-386663) Getting domain xml...
	I0729 18:26:59.825885   78080 main.go:141] libmachine: (old-k8s-version-386663) Creating domain...
	I0729 18:27:01.104265   78080 main.go:141] libmachine: (old-k8s-version-386663) Waiting to get IP...
	I0729 18:27:01.105349   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.105790   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.105836   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.105761   79098 retry.go:31] will retry after 308.255094ms: waiting for machine to come up
	I0729 18:27:01.415431   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.415999   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.416030   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.415952   79098 retry.go:31] will retry after 236.525723ms: waiting for machine to come up
	I0729 18:27:01.654767   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.655279   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.655312   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.655247   79098 retry.go:31] will retry after 311.010394ms: waiting for machine to come up
	I0729 18:27:01.967850   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.968374   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.968404   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.968333   79098 retry.go:31] will retry after 468.477549ms: waiting for machine to come up
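The retry.go lines above wait for the restarted old-k8s-version-386663 VM to obtain an IP, sleeping a growing, slightly randomized interval between DHCP-lease lookups. A small sketch of such a wait-for-IP loop with capped backoff and jitter; `lookup` is a hypothetical stand-in for the libvirt lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// growing the sleep between attempts much like the retry.go lines above.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		// Jitter keeps concurrent waiters from retrying in lockstep.
		time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff/2))))
		if backoff < 5*time.Second {
			backoff *= 2
		}
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	start := time.Now()
	ip, err := waitForIP(func() (string, error) {
		if time.Since(start) > 2*time.Second {
			return "192.0.2.10", nil // hypothetical address for the demo
		}
		return "", errors.New("no lease yet")
	}, 30*time.Second)
	fmt.Println(ip, err)
}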
	I0729 18:27:01.350059   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetIP
	I0729 18:27:01.352945   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:01.353398   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:27:01.353429   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:01.353630   77859 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 18:27:01.357955   77859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:01.371879   77859 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-502055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-502055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.244 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:27:01.372034   77859 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:27:01.372100   77859 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:01.412356   77859 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 18:27:01.412423   77859 ssh_runner.go:195] Run: which lz4
	I0729 18:27:01.417768   77859 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 18:27:01.422809   77859 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:27:01.422836   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 18:27:02.909800   77859 crio.go:462] duration metric: took 1.492088664s to copy over tarball
	I0729 18:27:02.909868   77859 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:26:59.995351   77627 pod_ready.go:102] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:01.999130   77627 pod_ready.go:102] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:04.012357   77627 pod_ready.go:92] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:04.012385   77627 pod_ready.go:81] duration metric: took 10.523832262s for pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.012398   77627 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zf5jf" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.025409   77627 pod_ready.go:92] pod "kube-proxy-zf5jf" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:04.025448   77627 pod_ready.go:81] duration metric: took 13.042254ms for pod "kube-proxy-zf5jf" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.025461   77627 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.036057   77627 pod_ready.go:92] pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:04.036078   77627 pod_ready.go:81] duration metric: took 10.608531ms for pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.036090   77627 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace to be "Ready" ...
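pod_ready.go above polls each control-plane pod and reports "Ready":"True" or "False" based on the pod's Ready condition. A minimal helper expressing that check with the Kubernetes API types (assumes the k8s.io/api module; the surrounding poll/timeout loop is omitted):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady mirrors the check behind the pod_ready.go lines above:
// a pod counts as Ready when its PodReady condition is ConditionTrue.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{}
	pod.Status.Conditions = []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
	}
	fmt.Println(isPodReady(pod)) // true
}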
	I0729 18:27:02.438066   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:02.438657   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:02.438686   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:02.438618   79098 retry.go:31] will retry after 601.056921ms: waiting for machine to come up
	I0729 18:27:03.041582   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:03.042097   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:03.042127   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:03.042040   79098 retry.go:31] will retry after 712.049848ms: waiting for machine to come up
	I0729 18:27:03.755536   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:03.756010   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:03.756040   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:03.755988   79098 retry.go:31] will retry after 1.092318096s: waiting for machine to come up
	I0729 18:27:04.849745   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:04.850202   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:04.850226   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:04.850147   79098 retry.go:31] will retry after 903.54457ms: waiting for machine to come up
	I0729 18:27:05.754781   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:05.755193   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:05.755218   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:05.755157   79098 retry.go:31] will retry after 1.693512671s: waiting for machine to come up
	I0729 18:27:05.188101   77859 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.27820184s)
	I0729 18:27:05.188132   77859 crio.go:469] duration metric: took 2.278304723s to extract the tarball
	I0729 18:27:05.188140   77859 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 18:27:05.227453   77859 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:05.274530   77859 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:27:05.274560   77859 cache_images.go:84] Images are preloaded, skipping loading
	I0729 18:27:05.274571   77859 kubeadm.go:934] updating node { 192.168.61.244 8444 v1.30.3 crio true true} ...
	I0729 18:27:05.274708   77859 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-502055 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-502055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:27:05.274788   77859 ssh_runner.go:195] Run: crio config
	I0729 18:27:05.320697   77859 cni.go:84] Creating CNI manager for ""
	I0729 18:27:05.320725   77859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:27:05.320741   77859 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:27:05.320774   77859 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.244 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-502055 NodeName:default-k8s-diff-port-502055 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.244"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.244 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:27:05.320948   77859 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.244
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-502055"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.244
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.244"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:27:05.321028   77859 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:27:05.331541   77859 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:27:05.331609   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:27:05.341433   77859 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0729 18:27:05.358696   77859 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:27:05.376531   77859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0729 18:27:05.394349   77859 ssh_runner.go:195] Run: grep 192.168.61.244	control-plane.minikube.internal$ /etc/hosts
	I0729 18:27:05.398156   77859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.244	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:05.411839   77859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:05.561467   77859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:27:05.583184   77859 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055 for IP: 192.168.61.244
	I0729 18:27:05.583209   77859 certs.go:194] generating shared ca certs ...
	I0729 18:27:05.583251   77859 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:05.583406   77859 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:27:05.583460   77859 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:27:05.583473   77859 certs.go:256] generating profile certs ...
	I0729 18:27:05.583577   77859 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/client.key
	I0729 18:27:05.583642   77859 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/apiserver.key.2edc4448
	I0729 18:27:05.583692   77859 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/proxy-client.key
	I0729 18:27:05.583835   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:27:05.583872   77859 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:27:05.583886   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:27:05.583917   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:27:05.583957   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:27:05.583991   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:27:05.584048   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:05.584726   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:27:05.624996   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:27:05.670153   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:27:05.715354   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:27:05.743807   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 18:27:05.777366   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:27:05.802152   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:27:05.826974   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:27:05.850417   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:27:05.873185   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:27:05.899387   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:27:05.927963   77859 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:27:05.947817   77859 ssh_runner.go:195] Run: openssl version
	I0729 18:27:05.955635   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:27:05.969765   77859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:27:05.974559   77859 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:27:05.974606   77859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:27:05.980557   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:27:05.991819   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:27:06.004961   77859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:06.009999   77859 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:06.010074   77859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:06.016045   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:27:06.027698   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:27:06.039648   77859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:27:06.045057   77859 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:27:06.045130   77859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:27:06.051127   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 18:27:06.062761   77859 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:27:06.068832   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:27:06.076652   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:27:06.084517   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:27:06.091125   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:27:06.097346   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:27:06.103428   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
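The `openssl x509 ... -checkend 86400` runs above confirm each existing control-plane certificate remains valid for at least another 24 hours before it is reused. The equivalent check in Go with crypto/x509, as a sketch (the certificate path is one of those listed above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within d, i.e. the condition `openssl x509 -checkend` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}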
	I0729 18:27:06.109312   77859 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-502055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-502055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.244 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:27:06.109403   77859 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:27:06.109440   77859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:06.153439   77859 cri.go:89] found id: ""
	I0729 18:27:06.153528   77859 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:27:06.166412   77859 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:27:06.166434   77859 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:27:06.166486   77859 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:27:06.183064   77859 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:27:06.184168   77859 kubeconfig.go:125] found "default-k8s-diff-port-502055" server: "https://192.168.61.244:8444"
	I0729 18:27:06.186283   77859 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:27:06.197418   77859 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.244
	I0729 18:27:06.197444   77859 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:27:06.197454   77859 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:27:06.197506   77859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:06.237753   77859 cri.go:89] found id: ""
	I0729 18:27:06.237839   77859 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:27:06.257323   77859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:27:06.269157   77859 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:27:06.269176   77859 kubeadm.go:157] found existing configuration files:
	
	I0729 18:27:06.269229   77859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 18:27:06.279313   77859 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:27:06.279369   77859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:27:06.292141   77859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 18:27:06.303961   77859 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:27:06.304028   77859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:27:06.316051   77859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 18:27:06.328004   77859 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:27:06.328064   77859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:27:06.340357   77859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 18:27:06.352021   77859 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:27:06.352068   77859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:27:06.364479   77859 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:27:06.375313   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:06.498692   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:07.853845   77859 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.355105254s)
	I0729 18:27:07.853882   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:08.069616   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:08.144574   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:08.225236   77859 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:27:08.225336   77859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:08.725789   77859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:09.226271   77859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:09.270268   77859 api_server.go:72] duration metric: took 1.045028259s to wait for apiserver process to appear ...
	I0729 18:27:09.270298   77859 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:27:09.270320   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:09.270877   77859 api_server.go:269] stopped: https://192.168.61.244:8444/healthz: Get "https://192.168.61.244:8444/healthz": dial tcp 192.168.61.244:8444: connect: connection refused
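api_server.go above starts polling https://192.168.61.244:8444/healthz; the first attempt is refused because the apiserver is still coming up, and later attempts (below) return 403 and then 500 until the post-start hooks finish. A bare-bones version of that poll loop; TLS verification is skipped here purely to keep the sketch short, whereas the real check uses the cluster's certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz hits the apiserver healthz endpoint until it returns 200 OK
// or the timeout elapses, echoing the check/stopped pattern in the log.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: skip verification instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := pollHealthz("https://192.168.61.244:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}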
	I0729 18:27:06.043838   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:08.044382   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:07.451087   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:07.451659   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:07.451688   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:07.451607   79098 retry.go:31] will retry after 1.734643072s: waiting for machine to come up
	I0729 18:27:09.188407   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:09.188963   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:09.188997   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:09.188900   79098 retry.go:31] will retry after 2.010973572s: waiting for machine to come up
	I0729 18:27:11.201171   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:11.201586   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:11.201620   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:11.201535   79098 retry.go:31] will retry after 3.178533437s: waiting for machine to come up
	I0729 18:27:09.771273   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:12.506136   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:27:12.506166   77859 api_server.go:103] status: https://192.168.61.244:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:27:12.506179   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:12.518847   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:27:12.518881   77859 api_server.go:103] status: https://192.168.61.244:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:27:12.771281   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:12.775798   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:27:12.775832   77859 api_server.go:103] status: https://192.168.61.244:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:27:13.270383   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:13.281935   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:27:13.281975   77859 api_server.go:103] status: https://192.168.61.244:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:27:13.770440   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:13.776004   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 200:
	ok
	I0729 18:27:13.783210   77859 api_server.go:141] control plane version: v1.30.3
	I0729 18:27:13.783237   77859 api_server.go:131] duration metric: took 4.512933596s to wait for apiserver health ...
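
The lines above show the usual bootstrap pattern: poll the apiserver's /healthz endpoint, tolerating 403 (anonymous user blocked) and 500 (post-start hooks such as rbac/bootstrap-roles not finished) until it returns 200. Below is a minimal Go sketch of such a loop, not minikube's api_server.go itself; the URL and overall timeout are taken from the log, and skipping TLS verification is an assumption about the bootstrap-time certificate.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it answers 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// Bootstrap serves a self-signed cert, so verification is skipped
		// in this sketch; a real client would pin the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 200 with body "ok" means healthy; 403/500 mean "keep waiting".
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.244:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
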
	I0729 18:27:13.783247   77859 cni.go:84] Creating CNI manager for ""
	I0729 18:27:13.783253   77859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:27:13.785148   77859 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:27:13.786485   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:27:13.814986   77859 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
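
The scp step above drops a bridge CNI config into /etc/cni/net.d. The log does not show the file's contents, so the sketch below writes a generic bridge + host-local conflist as an assumed stand-in for the 496-byte 1-k8s.conflist that was actually copied; the subnet and plugin options are illustrative, not minikube's.

package main

import "os"

// A generic bridge CNI configuration; field values are assumptions, not the
// contents of minikube's real 1-k8s.conflist.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}`

func main() {
	// Mirror the logged steps: mkdir -p /etc/cni/net.d, then write the conflist.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
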
	I0729 18:27:13.860557   77859 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:27:13.872823   77859 system_pods.go:59] 8 kube-system pods found
	I0729 18:27:13.872864   77859 system_pods.go:61] "coredns-7db6d8ff4d-mk6mx" [e005b1f9-cc7a-45aa-915e-85a461ebc814] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 18:27:13.872871   77859 system_pods.go:61] "etcd-default-k8s-diff-port-502055" [72b552cc-67b0-46bf-b3dd-b6732ebe8493] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 18:27:13.872879   77859 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-502055" [0dc22dbc-667e-4d6f-9938-b13bf3503f79] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 18:27:13.872885   77859 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-502055" [4df00b98-12cf-4359-9d98-8cce6ee9708a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 18:27:13.872891   77859 system_pods.go:61] "kube-proxy-cgdm8" [57a99bb3-9e63-47dd-a958-5be7f3c0a9c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 18:27:13.872898   77859 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-502055" [247b7cd1-6267-469d-af05-b33b284ae846] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 18:27:13.872903   77859 system_pods.go:61] "metrics-server-569cc877fc-bm8tm" [6891d9ee-82db-4307-adf1-ff60d35506bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:27:13.872912   77859 system_pods.go:61] "storage-provisioner" [c2264d30-60dc-41f9-9b84-3b073031cf1b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 18:27:13.872920   77859 system_pods.go:74] duration metric: took 12.342162ms to wait for pod list to return data ...
	I0729 18:27:13.872929   77859 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:27:13.879353   77859 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:27:13.879384   77859 node_conditions.go:123] node cpu capacity is 2
	I0729 18:27:13.879396   77859 node_conditions.go:105] duration metric: took 6.459994ms to run NodePressure ...
	I0729 18:27:13.879416   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:14.172203   77859 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 18:27:14.178467   77859 kubeadm.go:739] kubelet initialised
	I0729 18:27:14.178490   77859 kubeadm.go:740] duration metric: took 6.259862ms waiting for restarted kubelet to initialise ...
	I0729 18:27:14.178499   77859 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:27:14.184872   77859 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.190847   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.190871   77859 pod_ready.go:81] duration metric: took 5.974917ms for pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.190879   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.190886   77859 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.195570   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.195593   77859 pod_ready.go:81] duration metric: took 4.699847ms for pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.195603   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.195610   77859 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.199460   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.199480   77859 pod_ready.go:81] duration metric: took 3.863218ms for pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.199489   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.199494   77859 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.264725   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.264759   77859 pod_ready.go:81] duration metric: took 65.256372ms for pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.264774   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.264781   77859 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cgdm8" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.664064   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "kube-proxy-cgdm8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.664089   77859 pod_ready.go:81] duration metric: took 399.300184ms for pod "kube-proxy-cgdm8" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.664100   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "kube-proxy-cgdm8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.664109   77859 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:10.044797   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:12.543553   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:15.064029   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.064059   77859 pod_ready.go:81] duration metric: took 399.939139ms for pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:15.064074   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.064082   77859 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:15.464538   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.464564   77859 pod_ready.go:81] duration metric: took 400.472397ms for pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:15.464584   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.464592   77859 pod_ready.go:38] duration metric: took 1.286083847s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
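
The pod_ready.go lines follow one pattern: a pod counts as "Ready" only when its PodReady condition is True, and the extra wait is skipped (with a logged error) while the hosting node still reports Ready=False. Here is a hedged client-go illustration of the Ready check, not pod_ready.go itself; the kubeconfig path and pod name are taken from the log and the poll interval is an assumption.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19345-11206/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The log waits up to 4m0s for each system-critical pod.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-mk6mx", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}
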
	I0729 18:27:15.464609   77859 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 18:27:15.478197   77859 ops.go:34] apiserver oom_adj: -16
	I0729 18:27:15.478220   77859 kubeadm.go:597] duration metric: took 9.311779975s to restartPrimaryControlPlane
	I0729 18:27:15.478229   77859 kubeadm.go:394] duration metric: took 9.368934157s to StartCluster
	I0729 18:27:15.478247   77859 settings.go:142] acquiring lock: {Name:mkd2c4591636cc1d19b23a0dab1807db2e7ea395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:15.478311   77859 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:27:15.479920   77859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:15.480159   77859 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.244 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:27:15.480244   77859 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 18:27:15.480322   77859 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-502055"
	I0729 18:27:15.480355   77859 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-502055"
	I0729 18:27:15.480356   77859 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-502055"
	W0729 18:27:15.480368   77859 addons.go:243] addon storage-provisioner should already be in state true
	I0729 18:27:15.480371   77859 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-502055"
	I0729 18:27:15.480396   77859 host.go:66] Checking if "default-k8s-diff-port-502055" exists ...
	I0729 18:27:15.480397   77859 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-502055"
	I0729 18:27:15.480402   77859 config.go:182] Loaded profile config "default-k8s-diff-port-502055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:27:15.480415   77859 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-502055"
	W0729 18:27:15.480426   77859 addons.go:243] addon metrics-server should already be in state true
	I0729 18:27:15.480460   77859 host.go:66] Checking if "default-k8s-diff-port-502055" exists ...
	I0729 18:27:15.480709   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.480723   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.480738   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.480738   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.480914   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.480943   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.482004   77859 out.go:177] * Verifying Kubernetes components...
	I0729 18:27:15.483504   77859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:15.495748   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35469
	I0729 18:27:15.495965   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43251
	I0729 18:27:15.495977   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41147
	I0729 18:27:15.496147   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.496324   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.496433   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.496604   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.496622   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.496760   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.496778   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.496914   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.496930   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.496982   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.497086   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.497219   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.497644   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.497672   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.498076   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:27:15.498408   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.498449   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.501769   77859 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-502055"
	W0729 18:27:15.501790   77859 addons.go:243] addon default-storageclass should already be in state true
	I0729 18:27:15.501814   77859 host.go:66] Checking if "default-k8s-diff-port-502055" exists ...
	I0729 18:27:15.502132   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.502163   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.516862   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42139
	I0729 18:27:15.517070   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33417
	I0729 18:27:15.517336   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.517525   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.517845   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.517877   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.518255   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.518356   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.518418   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.518657   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:27:15.518793   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.519009   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:27:15.520045   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44865
	I0729 18:27:15.520489   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.520613   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:27:15.520785   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:27:15.520962   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.520979   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.521295   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.521697   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.521712   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.522950   77859 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:15.522950   77859 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 18:27:15.524246   77859 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 18:27:15.524268   77859 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 18:27:15.524291   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:27:15.524355   77859 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:27:15.524370   77859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 18:27:15.524388   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:27:15.527946   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.528008   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.528609   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:27:15.528645   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.528678   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:27:15.528691   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.528723   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:27:15.528939   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:27:15.528953   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:27:15.529101   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:27:15.529150   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:27:15.529218   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:27:15.529524   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:27:15.529716   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:27:15.539969   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41273
	I0729 18:27:15.540410   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.540887   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.540913   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.541351   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.541675   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:27:15.543494   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:27:15.543728   77859 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 18:27:15.543744   77859 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 18:27:15.543762   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:27:15.546809   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.547225   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:27:15.547250   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.547405   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:27:15.547595   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:27:15.547736   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:27:15.547859   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:27:15.662741   77859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:27:15.681179   77859 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-502055" to be "Ready" ...
	I0729 18:27:15.754691   77859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:27:15.767498   77859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 18:27:15.767515   77859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 18:27:15.781857   77859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 18:27:15.801619   77859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 18:27:15.801645   77859 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 18:27:15.823663   77859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:27:15.823690   77859 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 18:27:15.847827   77859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:27:16.818178   77859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.063432468s)
	I0729 18:27:16.818180   77859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.036288517s)
	I0729 18:27:16.818268   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.818234   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.818290   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.818307   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.818677   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Closing plugin on server side
	I0729 18:27:16.818680   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Closing plugin on server side
	I0729 18:27:16.818694   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.818710   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.818723   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.818724   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.818735   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.818740   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.818755   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.818766   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.818989   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.819000   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.819004   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.819017   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.819014   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Closing plugin on server side
	I0729 18:27:16.824028   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.824047   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.824268   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.824292   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.877321   77859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.029455089s)
	I0729 18:27:16.877378   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.877393   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.877718   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.877767   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.877790   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.877801   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.878030   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.878047   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.878061   77859 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-502055"
	I0729 18:27:16.879704   77859 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 18:27:14.381238   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:14.381648   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:14.381677   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:14.381609   79098 retry.go:31] will retry after 4.005160817s: waiting for machine to come up
	I0729 18:27:16.880972   77859 addons.go:510] duration metric: took 1.400728317s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
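
Each addon is enabled by applying its manifests inside the VM with the bundled kubectl against /var/lib/minikube/kubeconfig. The sketch below shows that apply step as a plain local exec call mirroring the logged command; in the real flow it runs inside the guest over SSH via ssh_runner, so this is an illustration, not the actual code path.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Mirrors the metrics-server apply command from the log; sudo accepts the
	// KUBECONFIG=... assignment and passes it to kubectl's environment.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.30.3/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("apply failed:", err)
	}
}
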
	I0729 18:27:17.685480   77859 node_ready.go:53] node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:19.687853   77859 node_ready.go:53] node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.042487   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:17.043250   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:19.045374   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:19.859418   77394 start.go:364] duration metric: took 54.906462088s to acquireMachinesLock for "no-preload-888056"
	I0729 18:27:19.859470   77394 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:27:19.859478   77394 fix.go:54] fixHost starting: 
	I0729 18:27:19.859850   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:19.859896   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:19.876798   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46323
	I0729 18:27:19.877254   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:19.877674   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:27:19.877709   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:19.878087   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:19.878257   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:19.878399   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:27:19.879875   77394 fix.go:112] recreateIfNeeded on no-preload-888056: state=Stopped err=<nil>
	I0729 18:27:19.879909   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	W0729 18:27:19.880054   77394 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:27:19.882098   77394 out.go:177] * Restarting existing kvm2 VM for "no-preload-888056" ...
	I0729 18:27:18.388470   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.388971   78080 main.go:141] libmachine: (old-k8s-version-386663) Found IP for machine: 192.168.50.70
	I0729 18:27:18.388989   78080 main.go:141] libmachine: (old-k8s-version-386663) Reserving static IP address...
	I0729 18:27:18.388999   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has current primary IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.389431   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "old-k8s-version-386663", mac: "52:54:00:78:b6:ac", ip: "192.168.50.70"} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.389459   78080 main.go:141] libmachine: (old-k8s-version-386663) Reserved static IP address: 192.168.50.70
	I0729 18:27:18.389477   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | skip adding static IP to network mk-old-k8s-version-386663 - found existing host DHCP lease matching {name: "old-k8s-version-386663", mac: "52:54:00:78:b6:ac", ip: "192.168.50.70"}
	I0729 18:27:18.389493   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | Getting to WaitForSSH function...
	I0729 18:27:18.389515   78080 main.go:141] libmachine: (old-k8s-version-386663) Waiting for SSH to be available...
	I0729 18:27:18.391523   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.391916   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.391941   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.392062   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | Using SSH client type: external
	I0729 18:27:18.392088   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa (-rw-------)
	I0729 18:27:18.392119   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:27:18.392134   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | About to run SSH command:
	I0729 18:27:18.392150   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | exit 0
	I0729 18:27:18.514735   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | SSH cmd err, output: <nil>: 
	I0729 18:27:18.515114   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetConfigRaw
	I0729 18:27:18.515736   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:18.518194   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.518615   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.518651   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.518879   78080 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/config.json ...
	I0729 18:27:18.519090   78080 machine.go:94] provisionDockerMachine start ...
	I0729 18:27:18.519113   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:18.519322   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.521434   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.521824   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.521846   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.521996   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:18.522181   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.522349   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.522514   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:18.522724   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:18.522960   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:18.522975   78080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:27:18.622960   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:27:18.622989   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetMachineName
	I0729 18:27:18.623249   78080 buildroot.go:166] provisioning hostname "old-k8s-version-386663"
	I0729 18:27:18.623277   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetMachineName
	I0729 18:27:18.623461   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.626009   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.626376   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.626406   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.626649   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:18.626876   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.627141   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.627301   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:18.627474   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:18.627669   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:18.627683   78080 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-386663 && echo "old-k8s-version-386663" | sudo tee /etc/hostname
	I0729 18:27:18.748137   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-386663
	
	I0729 18:27:18.748165   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.751546   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.751882   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.751916   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.752086   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:18.752270   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.752409   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.752550   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:18.752747   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:18.753004   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:18.753031   78080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-386663' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-386663/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-386663' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:27:18.863358   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
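
provisionDockerMachine drives each of the steps above over SSH: read the current hostname, set it, then patch /etc/hosts. Below is a minimal sketch of such a "native SSH client" using golang.org/x/crypto/ssh, not minikube's sshutil; the address, user, and key path are taken from the log, and skipping host-key verification mirrors the StrictHostKeyChecking=no seen in the logged ssh flags.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH connects with a private key and runs a single command.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// The test VMs use throwaway host keys, so verification is skipped
		// here; production code should pin known hosts.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.50.70:22", "docker",
		"/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa",
		"hostname")
	fmt.Println(out, err)
}
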
	I0729 18:27:18.863389   78080 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:27:18.863415   78080 buildroot.go:174] setting up certificates
	I0729 18:27:18.863425   78080 provision.go:84] configureAuth start
	I0729 18:27:18.863436   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetMachineName
	I0729 18:27:18.863754   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:18.866285   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.866641   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.866668   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.866797   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.868886   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.869241   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.869270   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.869404   78080 provision.go:143] copyHostCerts
	I0729 18:27:18.869459   78080 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:27:18.869468   78080 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:27:18.869522   78080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:27:18.869614   78080 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:27:18.869624   78080 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:27:18.869652   78080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:27:18.869740   78080 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:27:18.869750   78080 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:27:18.869772   78080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:27:18.869833   78080 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-386663 san=[127.0.0.1 192.168.50.70 localhost minikube old-k8s-version-386663]
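
The "generating server cert" line amounts to building an x509 server certificate carrying the listed SANs and signing it with the profile's CA. The sketch below shows that step with Go's crypto/x509; paths, organization, and SANs are copied from the log, the unencrypted PKCS#1 RSA key format is an assumption, and this is not minikube's provision code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// mustDecode reads a PEM file and returns its first block.
func mustDecode(path string) *pem.Block {
	raw, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatalf("no PEM data in %s", path)
	}
	return block
}

func main() {
	certsDir := "/home/jenkins/minikube-integration/19345-11206/.minikube/certs"
	caCert, err := x509.ParseCertificate(mustDecode(certsDir + "/ca.pem").Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Assumes ca-key.pem is an unencrypted PKCS#1 RSA key.
	caKey, err := x509.ParsePKCS1PrivateKey(mustDecode(certsDir + "/ca-key.pem").Bytes)
	if err != nil {
		log.Fatal(err)
	}
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-386663"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the log line above.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-386663"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.70")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
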
	I0729 18:27:19.142743   78080 provision.go:177] copyRemoteCerts
	I0729 18:27:19.142808   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:27:19.142842   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.145484   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.145843   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.145872   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.146092   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.146334   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.146532   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.146692   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.230725   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:27:19.255862   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 18:27:19.290922   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 18:27:19.317519   78080 provision.go:87] duration metric: took 454.081583ms to configureAuth
	I0729 18:27:19.317549   78080 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:27:19.317766   78080 config.go:182] Loaded profile config "old-k8s-version-386663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 18:27:19.317854   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.320636   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.321074   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.321110   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.321346   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.321603   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.321782   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.321959   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.322158   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:19.322336   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:19.322351   78080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:27:19.626713   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:27:19.626737   78080 machine.go:97] duration metric: took 1.107631867s to provisionDockerMachine
	I0729 18:27:19.626749   78080 start.go:293] postStartSetup for "old-k8s-version-386663" (driver="kvm2")
	I0729 18:27:19.626763   78080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:27:19.626834   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.627168   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:27:19.627197   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.629389   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.629751   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.629782   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.629907   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.630102   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.630302   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.630460   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.709702   78080 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:27:19.713879   78080 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:27:19.713913   78080 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:27:19.713994   78080 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:27:19.714093   78080 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:27:19.714215   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:27:19.725226   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:19.751727   78080 start.go:296] duration metric: took 124.964072ms for postStartSetup
	I0729 18:27:19.751767   78080 fix.go:56] duration metric: took 19.951972224s for fixHost
	I0729 18:27:19.751796   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.754481   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.754843   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.754877   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.755107   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.755321   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.755482   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.755663   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.755829   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:19.756012   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:19.756024   78080 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:27:19.859279   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277639.831700968
	
	I0729 18:27:19.859302   78080 fix.go:216] guest clock: 1722277639.831700968
	I0729 18:27:19.859309   78080 fix.go:229] Guest: 2024-07-29 18:27:19.831700968 +0000 UTC Remote: 2024-07-29 18:27:19.751770935 +0000 UTC m=+272.565043390 (delta=79.930033ms)
	I0729 18:27:19.859327   78080 fix.go:200] guest clock delta is within tolerance: 79.930033ms
	I0729 18:27:19.859332   78080 start.go:83] releasing machines lock for "old-k8s-version-386663", held for 20.059569122s
	I0729 18:27:19.859353   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.859661   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:19.862741   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.863225   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.863261   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.863449   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.864092   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.864309   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.864392   78080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:27:19.864432   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.864547   78080 ssh_runner.go:195] Run: cat /version.json
	I0729 18:27:19.864572   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.867636   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.867798   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.868019   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.868044   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.868178   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.868330   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.868356   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.868360   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.868500   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.868587   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.868667   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.868754   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.868910   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.869046   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.947441   78080 ssh_runner.go:195] Run: systemctl --version
	I0729 18:27:19.967868   78080 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:27:20.114336   78080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:27:20.121716   78080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:27:20.121793   78080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:27:20.143272   78080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:27:20.143298   78080 start.go:495] detecting cgroup driver to use...
	I0729 18:27:20.143385   78080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:27:20.162433   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:27:20.178310   78080 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:27:20.178397   78080 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:27:20.194091   78080 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:27:20.209796   78080 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:27:20.341466   78080 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:27:20.514215   78080 docker.go:233] disabling docker service ...
	I0729 18:27:20.514338   78080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:27:20.531018   78080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:27:20.551839   78080 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:27:20.680430   78080 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:27:20.834782   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:27:20.852454   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:27:20.874962   78080 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 18:27:20.875017   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:20.886550   78080 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:27:20.886619   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:20.899344   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:20.914254   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
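	[editor's note] The three sed edits above aim to leave 02-crio.conf with a pinned pause image, the cgroupfs cgroup manager, and a pod-scoped conmon cgroup. A sketch of inspecting the result, assuming the stock layout of that drop-in file (not captured from this run):

		# Hedged sketch: inspect the outcome of the sed edits above.
		grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
		# expected, approximately:
		#   pause_image = "registry.k8s.io/pause:3.2"
		#   cgroup_manager = "cgroupfs"
		#   conmon_cgroup = "pod"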
	I0729 18:27:20.927308   78080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:27:20.939807   78080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:27:20.951648   78080 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:27:20.951738   78080 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:27:20.967918   78080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
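	[editor's note] The status-255 sysctl above simply means /proc/sys/net/bridge/* does not exist until br_netfilter is loaded; the modprobe creates it, and the ip_forward write enables routing between pod and node networks. A sketch of verifying the end state these two commands establish (illustrative only):

		# Hedged sketch: verify the kernel state set up above.
		lsmod | grep br_netfilter                            # module loaded by the modprobe step
		cat /proc/sys/net/bridge/bridge-nf-call-iptables     # exists once br_netfilter is loaded
		cat /proc/sys/net/ipv4/ip_forward                    # expected: 1 after the echo above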
	I0729 18:27:20.979872   78080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:21.125398   78080 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:27:21.290736   78080 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:27:21.290816   78080 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:27:21.296922   78080 start.go:563] Will wait 60s for crictl version
	I0729 18:27:21.296987   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:21.302200   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:27:21.350783   78080 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:27:21.350919   78080 ssh_runner.go:195] Run: crio --version
	I0729 18:27:21.391539   78080 ssh_runner.go:195] Run: crio --version
	I0729 18:27:21.441225   78080 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 18:27:21.442583   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:21.446238   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:21.446728   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:21.446756   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:21.446988   78080 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 18:27:21.452537   78080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
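	[editor's note] The one-liner above is an idempotent /etc/hosts update: strip any existing host.minikube.internal entry, append the current host-side address, and install the result with sudo (a temp file is used because the redirect itself runs unprivileged). The same pattern, restated with comments as a sketch:

		# Hedged, commented restatement of the logged /etc/hosts update.
		{
		  grep -v $'\thost.minikube.internal$' /etc/hosts     # drop any stale entry
		  echo $'192.168.50.1\thost.minikube.internal'        # append the host-side address from the log
		} > /tmp/h.$$                                         # build the new file unprivileged
		sudo cp /tmp/h.$$ /etc/hosts                          # install it with root privileges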
	I0729 18:27:21.470394   78080 kubeadm.go:883] updating cluster {Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:27:21.470555   78080 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:27:21.470610   78080 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:21.531670   78080 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 18:27:21.531742   78080 ssh_runner.go:195] Run: which lz4
	I0729 18:27:21.536436   78080 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 18:27:21.542100   78080 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:27:21.542139   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 18:27:19.883514   77394 main.go:141] libmachine: (no-preload-888056) Calling .Start
	I0729 18:27:19.883693   77394 main.go:141] libmachine: (no-preload-888056) Ensuring networks are active...
	I0729 18:27:19.884447   77394 main.go:141] libmachine: (no-preload-888056) Ensuring network default is active
	I0729 18:27:19.884847   77394 main.go:141] libmachine: (no-preload-888056) Ensuring network mk-no-preload-888056 is active
	I0729 18:27:19.885240   77394 main.go:141] libmachine: (no-preload-888056) Getting domain xml...
	I0729 18:27:19.886133   77394 main.go:141] libmachine: (no-preload-888056) Creating domain...
	I0729 18:27:21.226599   77394 main.go:141] libmachine: (no-preload-888056) Waiting to get IP...
	I0729 18:27:21.227673   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:21.228215   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:21.228278   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:21.228178   79288 retry.go:31] will retry after 290.676407ms: waiting for machine to come up
	I0729 18:27:21.520818   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:21.521458   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:21.521480   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:21.521360   79288 retry.go:31] will retry after 266.145355ms: waiting for machine to come up
	I0729 18:27:21.789603   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:21.790170   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:21.790200   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:21.790137   79288 retry.go:31] will retry after 464.137123ms: waiting for machine to come up
	I0729 18:27:22.255586   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:22.256159   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:22.256184   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:22.256098   79288 retry.go:31] will retry after 562.330595ms: waiting for machine to come up
	I0729 18:27:21.691280   77859 node_ready.go:53] node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:23.188725   77859 node_ready.go:49] node "default-k8s-diff-port-502055" has status "Ready":"True"
	I0729 18:27:23.188758   77859 node_ready.go:38] duration metric: took 7.507549954s for node "default-k8s-diff-port-502055" to be "Ready" ...
	I0729 18:27:23.188772   77859 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:27:23.197714   77859 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:23.204037   77859 pod_ready.go:92] pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:23.204065   77859 pod_ready.go:81] duration metric: took 6.32123ms for pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:23.204086   77859 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:23.211765   77859 pod_ready.go:92] pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:23.211791   77859 pod_ready.go:81] duration metric: took 7.69614ms for pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:23.211803   77859 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:21.544757   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:24.043649   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:23.329902   78080 crio.go:462] duration metric: took 1.793505279s to copy over tarball
	I0729 18:27:23.329979   78080 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:27:26.453768   78080 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.123735537s)
	I0729 18:27:26.453800   78080 crio.go:469] duration metric: took 3.123869338s to extract the tarball
	I0729 18:27:26.453809   78080 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 18:27:26.501748   78080 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:26.538093   78080 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 18:27:26.538124   78080 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 18:27:26.538226   78080 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.538297   78080 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 18:27:26.538387   78080 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.538232   78080 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:26.538441   78080 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.538303   78080 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.538277   78080 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.538783   78080 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.540806   78080 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.540823   78080 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.540847   78080 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.540858   78080 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 18:27:26.540806   78080 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:26.540894   78080 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.540937   78080 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.540987   78080 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.700993   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.704402   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.712647   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.714034   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.715935   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.753888   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.758588   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 18:27:26.837981   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:26.844473   78080 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 18:27:26.844532   78080 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.844578   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.877082   78080 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 18:27:26.877134   78080 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.877183   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.889792   78080 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 18:27:26.889887   78080 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.889842   78080 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 18:27:26.889944   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.889983   78080 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.890034   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.916338   78080 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 18:27:26.916388   78080 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.916440   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.916437   78080 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 18:27:26.916540   78080 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.916581   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.942747   78080 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 18:27:26.942794   78080 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 18:27:26.942839   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:27.056976   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:27.056976   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 18:27:27.057045   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:27.057071   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:27.057101   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:27.057152   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:27.057178   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 18:27:27.219396   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 18:27:22.820490   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:22.820969   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:22.820993   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:22.820906   79288 retry.go:31] will retry after 728.452145ms: waiting for machine to come up
	I0729 18:27:23.550655   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:23.551337   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:23.551361   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:23.551287   79288 retry.go:31] will retry after 782.583051ms: waiting for machine to come up
	I0729 18:27:24.335785   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:24.336257   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:24.336310   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:24.336235   79288 retry.go:31] will retry after 1.040109521s: waiting for machine to come up
	I0729 18:27:25.377676   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:25.378187   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:25.378231   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:25.378153   79288 retry.go:31] will retry after 1.276093038s: waiting for machine to come up
	I0729 18:27:26.655479   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:26.655922   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:26.655950   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:26.655872   79288 retry.go:31] will retry after 1.267687539s: waiting for machine to come up
	I0729 18:27:25.219175   77859 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:27.225735   77859 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:27.718741   77859 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:27.718772   77859 pod_ready.go:81] duration metric: took 4.506959705s for pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.718786   77859 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.723687   77859 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:27.723709   77859 pod_ready.go:81] duration metric: took 4.915901ms for pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.723720   77859 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cgdm8" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.728504   77859 pod_ready.go:92] pod "kube-proxy-cgdm8" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:27.728526   77859 pod_ready.go:81] duration metric: took 4.797185ms for pod "kube-proxy-cgdm8" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.728538   77859 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.733036   77859 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:27.733061   77859 pod_ready.go:81] duration metric: took 4.514471ms for pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.733073   77859 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:29.739966   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:26.044607   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:28.543664   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:27.219541   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 18:27:27.223329   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 18:27:27.223406   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 18:27:27.223450   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 18:27:27.223492   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 18:27:27.223536   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 18:27:27.223567   78080 cache_images.go:92] duration metric: took 685.427642ms to LoadCachedImages
	W0729 18:27:27.223653   78080 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0729 18:27:27.223672   78080 kubeadm.go:934] updating node { 192.168.50.70 8443 v1.20.0 crio true true} ...
	I0729 18:27:27.223785   78080 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-386663 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:27:27.223866   78080 ssh_runner.go:195] Run: crio config
	I0729 18:27:27.273186   78080 cni.go:84] Creating CNI manager for ""
	I0729 18:27:27.273207   78080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:27:27.273217   78080 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:27:27.273241   78080 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.70 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-386663 NodeName:old-k8s-version-386663 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 18:27:27.273424   78080 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-386663"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:27:27.273498   78080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 18:27:27.285247   78080 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:27:27.285327   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:27:27.295747   78080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 18:27:27.314192   78080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:27:27.331654   78080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 18:27:27.351717   78080 ssh_runner.go:195] Run: grep 192.168.50.70	control-plane.minikube.internal$ /etc/hosts
	I0729 18:27:27.356205   78080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:27.370446   78080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:27.509250   78080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:27:27.528776   78080 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663 for IP: 192.168.50.70
	I0729 18:27:27.528804   78080 certs.go:194] generating shared ca certs ...
	I0729 18:27:27.528823   78080 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:27.528991   78080 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:27:27.529045   78080 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:27:27.529061   78080 certs.go:256] generating profile certs ...
	I0729 18:27:27.529194   78080 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/client.key
	I0729 18:27:27.529308   78080 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.key.71ea3f9f
	I0729 18:27:27.529364   78080 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.key
	I0729 18:27:27.529529   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:27:27.529569   78080 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:27:27.529584   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:27:27.529614   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:27:27.529645   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:27:27.529689   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:27:27.529751   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:27.530573   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:27:27.582122   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:27:27.626846   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:27:27.663609   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:27:27.700294   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 18:27:27.746614   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:27:27.785212   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:27:27.834479   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:27:27.866939   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:27:27.892613   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:27:27.919059   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:27:27.947557   78080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:27:27.968625   78080 ssh_runner.go:195] Run: openssl version
	I0729 18:27:27.976500   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:27:27.991016   78080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:27:27.996228   78080 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:27:27.996285   78080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:27:28.002529   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:27:28.013844   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:27:28.025388   78080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:28.029982   78080 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:28.030042   78080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:28.036362   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:27:28.050134   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:27:28.062742   78080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:27:28.067240   78080 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:27:28.067293   78080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:27:28.072973   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
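	[editor's note] The symlink names above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's hashed-certificate-directory convention: the link name is the certificate's subject hash plus a ".0" suffix, which is how OpenSSL locates CAs in /etc/ssl/certs. A sketch of reproducing one of those names, using a path taken from the log:

		# Hedged sketch of the hashed cert-dir convention used above.
		cert=/usr/share/ca-certificates/183932.pem            # path from the log above
		hash=$(openssl x509 -hash -noout -in "$cert")         # prints the subject hash, e.g. 3ec20f2e
		sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"        # OpenSSL looks CAs up by <hash>.0 here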
	I0729 18:27:28.084143   78080 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:27:28.089526   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:27:28.096556   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:27:28.103044   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:27:28.109337   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:27:28.115455   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:27:28.121449   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
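	[editor's note] Each "openssl x509 ... -checkend 86400" call above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), presumably so the restart path can decide whether certs need regenerating. A standalone sketch of the same check, using one of the logged paths:

		# Hedged sketch of the expiry-check pattern used above.
		if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
		  echo "certificate valid for at least another 24h"
		else
		  echo "certificate expires within 24h (or is already expired)"
		fi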
	I0729 18:27:28.127395   78080 kubeadm.go:392] StartCluster: {Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:27:28.127504   78080 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:27:28.127581   78080 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:28.176772   78080 cri.go:89] found id: ""
	I0729 18:27:28.176837   78080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:27:28.187955   78080 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:27:28.187979   78080 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:27:28.188034   78080 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:27:28.197926   78080 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:27:28.199364   78080 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-386663" does not appear in /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:27:28.200382   78080 kubeconfig.go:62] /home/jenkins/minikube-integration/19345-11206/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-386663" cluster setting kubeconfig missing "old-k8s-version-386663" context setting]
	I0729 18:27:28.201737   78080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:28.287712   78080 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:27:28.300675   78080 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.70
	I0729 18:27:28.300716   78080 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:27:28.300728   78080 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:27:28.300795   78080 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:28.343880   78080 cri.go:89] found id: ""
	I0729 18:27:28.343962   78080 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:27:28.362391   78080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:27:28.372805   78080 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:27:28.372830   78080 kubeadm.go:157] found existing configuration files:
	
	I0729 18:27:28.372882   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:27:28.383540   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:27:28.383629   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:27:28.396564   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:27:28.409151   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:27:28.409208   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:27:28.422243   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:27:28.434736   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:27:28.434839   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:27:28.447681   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:27:28.460008   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:27:28.460073   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:27:28.472647   78080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:27:28.484179   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:28.634526   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:29.206575   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:29.449626   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:29.550859   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:29.681945   78080 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:27:29.682015   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:30.182098   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:30.682977   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:31.182152   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:31.682468   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:32.183031   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
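	The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are a roughly 500 ms polling loop waiting for the apiserver process to appear. A rough Go sketch of such a wait loop, assuming an illustrative helper and timeout rather than minikube's real API:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess polls `pgrep -xnf <pattern>` every 500ms until a matching
	// process exists or the timeout elapses; names and timeout are illustrative.
	func waitForProcess(pattern string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if out, err := exec.Command("pgrep", "-xnf", pattern).Output(); err == nil && len(out) > 0 {
				return string(out), nil // PID of the newest matching process
			}
			time.Sleep(500 * time.Millisecond)
		}
		return "", fmt.Errorf("no process matching %q within %s", pattern, timeout)
	}

	func main() {
		pid, err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute)
		fmt.Println(pid, err)
	}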
	I0729 18:27:27.924957   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:27.925430   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:27.925461   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:27.925378   79288 retry.go:31] will retry after 1.455979038s: waiting for machine to come up
	I0729 18:27:29.383257   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:29.383769   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:29.383793   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:29.383722   79288 retry.go:31] will retry after 1.862834258s: waiting for machine to come up
	I0729 18:27:31.248806   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:31.249394   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:31.249414   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:31.249344   79288 retry.go:31] will retry after 3.203097967s: waiting for machine to come up
	I0729 18:27:32.242350   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:34.738663   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:31.043735   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:33.543152   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:32.682567   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:33.182100   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:33.682494   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:34.183075   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:34.683115   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:35.183094   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:35.683092   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:36.182173   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:36.682843   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:37.182324   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:34.453552   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:34.453906   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:34.453930   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:34.453852   79288 retry.go:31] will retry after 3.166208105s: waiting for machine to come up
	I0729 18:27:36.739239   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:38.740812   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:35.543428   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:38.042603   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
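	The pod_ready probes above keep re-checking whether the metrics-server pod's Ready condition has turned True. A hedged client-go sketch of that kind of readiness check (kubeconfig path is a placeholder; the pod name is taken from the log):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the named pod has condition Ready=True, which is
	// what the pod_ready.go probes above are waiting for.
	func podReady(clientset *kubernetes.Clientset, namespace, name string) (bool, error) {
		pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		// Placeholder kubeconfig path; the real runs use the profile's kubeconfig.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		ready, err := podReady(clientset, "kube-system", "metrics-server-569cc877fc-flh27")
		fmt.Println(ready, err)
	}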
	I0729 18:27:37.622330   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.622738   77394 main.go:141] libmachine: (no-preload-888056) Found IP for machine: 192.168.72.80
	I0729 18:27:37.622767   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has current primary IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.622779   77394 main.go:141] libmachine: (no-preload-888056) Reserving static IP address...
	I0729 18:27:37.623108   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "no-preload-888056", mac: "52:54:00:b2:b0:1a", ip: "192.168.72.80"} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.623144   77394 main.go:141] libmachine: (no-preload-888056) DBG | skip adding static IP to network mk-no-preload-888056 - found existing host DHCP lease matching {name: "no-preload-888056", mac: "52:54:00:b2:b0:1a", ip: "192.168.72.80"}
	I0729 18:27:37.623160   77394 main.go:141] libmachine: (no-preload-888056) Reserved static IP address: 192.168.72.80
	I0729 18:27:37.623174   77394 main.go:141] libmachine: (no-preload-888056) Waiting for SSH to be available...
	I0729 18:27:37.623183   77394 main.go:141] libmachine: (no-preload-888056) DBG | Getting to WaitForSSH function...
	I0729 18:27:37.625391   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.625732   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.625759   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.625927   77394 main.go:141] libmachine: (no-preload-888056) DBG | Using SSH client type: external
	I0729 18:27:37.625948   77394 main.go:141] libmachine: (no-preload-888056) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa (-rw-------)
	I0729 18:27:37.625994   77394 main.go:141] libmachine: (no-preload-888056) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:27:37.626008   77394 main.go:141] libmachine: (no-preload-888056) DBG | About to run SSH command:
	I0729 18:27:37.626020   77394 main.go:141] libmachine: (no-preload-888056) DBG | exit 0
	I0729 18:27:37.750587   77394 main.go:141] libmachine: (no-preload-888056) DBG | SSH cmd err, output: <nil>: 
	I0729 18:27:37.750986   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetConfigRaw
	I0729 18:27:37.751717   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetIP
	I0729 18:27:37.754387   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.754753   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.754781   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.754995   77394 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/config.json ...
	I0729 18:27:37.755184   77394 machine.go:94] provisionDockerMachine start ...
	I0729 18:27:37.755207   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:37.755397   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:37.757649   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.757965   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.757988   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.758128   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:37.758297   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.758463   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.758599   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:37.758754   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:37.758918   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:37.758927   77394 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:27:37.862940   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:27:37.862976   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:27:37.863205   77394 buildroot.go:166] provisioning hostname "no-preload-888056"
	I0729 18:27:37.863234   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:27:37.863425   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:37.866190   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.866538   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.866565   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.866705   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:37.866878   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.867046   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.867166   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:37.867307   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:37.867478   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:37.867490   77394 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-888056 && echo "no-preload-888056" | sudo tee /etc/hostname
	I0729 18:27:37.985031   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-888056
	
	I0729 18:27:37.985070   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:37.987577   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.987917   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.987945   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.988126   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:37.988311   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.988469   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.988601   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:37.988786   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:37.988994   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:37.989012   77394 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-888056' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-888056/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-888056' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:27:38.103831   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:27:38.103853   77394 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:27:38.103870   77394 buildroot.go:174] setting up certificates
	I0729 18:27:38.103878   77394 provision.go:84] configureAuth start
	I0729 18:27:38.103886   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:27:38.104166   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetIP
	I0729 18:27:38.107080   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.107493   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.107521   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.107690   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.110087   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.110495   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.110520   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.110738   77394 provision.go:143] copyHostCerts
	I0729 18:27:38.110793   77394 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:27:38.110802   77394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:27:38.110853   77394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:27:38.110968   77394 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:27:38.110978   77394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:27:38.110998   77394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:27:38.111056   77394 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:27:38.111063   77394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:27:38.111080   77394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:27:38.111149   77394 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.no-preload-888056 san=[127.0.0.1 192.168.72.80 localhost minikube no-preload-888056]
	I0729 18:27:38.327305   77394 provision.go:177] copyRemoteCerts
	I0729 18:27:38.327378   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:27:38.327407   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.330008   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.330304   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.330327   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.330516   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:38.330739   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.330908   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:38.331071   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:27:38.414678   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 18:27:38.443418   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:27:38.469248   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 18:27:38.494014   77394 provision.go:87] duration metric: took 390.106553ms to configureAuth
	I0729 18:27:38.494049   77394 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:27:38.494245   77394 config.go:182] Loaded profile config "no-preload-888056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:27:38.494357   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.497162   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.497586   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.497620   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.497946   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:38.498137   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.498328   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.498566   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:38.498766   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:38.498940   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:38.498955   77394 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:27:38.762438   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:27:38.762462   77394 machine.go:97] duration metric: took 1.007266999s to provisionDockerMachine
	I0729 18:27:38.762473   77394 start.go:293] postStartSetup for "no-preload-888056" (driver="kvm2")
	I0729 18:27:38.762484   77394 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:27:38.762511   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:38.762797   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:27:38.762832   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.765677   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.766031   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.766054   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.766222   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:38.766432   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.766621   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:38.766774   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:27:38.854492   77394 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:27:38.858934   77394 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:27:38.858962   77394 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:27:38.859041   77394 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:27:38.859136   77394 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:27:38.859251   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:27:38.869459   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:38.894422   77394 start.go:296] duration metric: took 131.935433ms for postStartSetup
	I0729 18:27:38.894466   77394 fix.go:56] duration metric: took 19.034987866s for fixHost
	I0729 18:27:38.894492   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.897266   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.897654   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.897684   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.897890   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:38.898102   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.898250   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.898356   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:38.898547   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:38.898721   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:38.898732   77394 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:27:39.003526   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277658.970659996
	
	I0729 18:27:39.003571   77394 fix.go:216] guest clock: 1722277658.970659996
	I0729 18:27:39.003581   77394 fix.go:229] Guest: 2024-07-29 18:27:38.970659996 +0000 UTC Remote: 2024-07-29 18:27:38.8944731 +0000 UTC m=+356.533366653 (delta=76.186896ms)
	I0729 18:27:39.003600   77394 fix.go:200] guest clock delta is within tolerance: 76.186896ms
	I0729 18:27:39.003605   77394 start.go:83] releasing machines lock for "no-preload-888056", held for 19.144159359s
	I0729 18:27:39.003622   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:39.003881   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetIP
	I0729 18:27:39.006550   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.006850   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:39.006886   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.007005   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:39.007597   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:39.007779   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:39.007879   77394 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:27:39.007939   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:39.008001   77394 ssh_runner.go:195] Run: cat /version.json
	I0729 18:27:39.008026   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:39.010634   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.010941   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:39.010965   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.010984   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.011257   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:39.011442   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:39.011474   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.011487   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:39.011632   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:39.011678   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:39.011782   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:39.011951   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:39.011985   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:27:39.012094   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:27:39.114446   77394 ssh_runner.go:195] Run: systemctl --version
	I0729 18:27:39.120848   77394 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:27:39.266976   77394 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:27:39.273603   77394 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:27:39.273670   77394 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:27:39.295511   77394 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:27:39.295533   77394 start.go:495] detecting cgroup driver to use...
	I0729 18:27:39.295593   77394 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:27:39.313692   77394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:27:39.328435   77394 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:27:39.328502   77394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:27:39.342580   77394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:27:39.356694   77394 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:27:39.474555   77394 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:27:39.632766   77394 docker.go:233] disabling docker service ...
	I0729 18:27:39.632827   77394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:27:39.648961   77394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:27:39.663277   77394 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:27:39.813329   77394 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:27:39.944017   77394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:27:39.957624   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:27:39.976348   77394 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 18:27:39.976401   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:39.986672   77394 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:27:39.986735   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:39.996867   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.007547   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.018141   77394 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:27:40.029258   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.040007   77394 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.057611   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.068107   77394 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:27:40.077798   77394 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:27:40.077877   77394 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:27:40.091040   77394 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:27:40.100846   77394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:40.227049   77394 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:27:40.368213   77394 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:27:40.368295   77394 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:27:40.374168   77394 start.go:563] Will wait 60s for crictl version
	I0729 18:27:40.374239   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.378268   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:27:40.422500   77394 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:27:40.422579   77394 ssh_runner.go:195] Run: crio --version
	I0729 18:27:40.451170   77394 ssh_runner.go:195] Run: crio --version
	I0729 18:27:40.481789   77394 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
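	After restarting CRI-O, the lines above wait up to 60s for /var/run/crio/crio.sock to exist and for crictl to answer. A small Go sketch of waiting for a socket path to appear, with an assumed poll interval:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls os.Stat on the CRI socket path until it exists or the
	// timeout elapses, mirroring the "Will wait 60s for socket path" step above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("socket %s did not appear within %s", path, timeout)
	}

	func main() {
		fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
	}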
	I0729 18:27:37.682180   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:38.182453   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:38.682639   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:39.182874   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:39.682496   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:40.182727   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:40.683073   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:41.182060   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:41.682421   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:42.182813   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:40.483209   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetIP
	I0729 18:27:40.486303   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:40.486738   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:40.486768   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:40.487032   77394 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 18:27:40.491318   77394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:40.505196   77394 kubeadm.go:883] updating cluster {Name:no-preload-888056 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-888056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.80 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:27:40.505303   77394 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 18:27:40.505333   77394 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:40.541356   77394 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 18:27:40.541380   77394 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 18:27:40.541445   77394 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:40.541452   77394 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:40.541465   77394 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:40.541495   77394 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.541503   77394 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.541527   77394 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.541583   77394 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:40.542060   77394 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 18:27:40.543507   77394 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:40.543519   77394 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.543505   77394 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 18:27:40.543535   77394 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.543504   77394 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.543761   77394 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:40.543799   77394 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:40.543999   77394 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:40.693026   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.709057   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.715664   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.720337   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:40.746126   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 18:27:40.748805   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:40.759200   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:40.768613   77394 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 18:27:40.768659   77394 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.768705   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.812940   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:40.852143   77394 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 18:27:40.852173   77394 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 18:27:40.852191   77394 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.852206   77394 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.852237   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.852249   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.890477   77394 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 18:27:40.890521   77394 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:40.890566   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.991390   77394 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 18:27:40.991435   77394 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:40.991462   77394 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 18:27:40.991486   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.991501   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.991508   77394 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:40.991548   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.991556   77394 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 18:27:40.991579   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.991595   77394 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:40.991609   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.991654   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.991694   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:41.087626   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 18:27:41.087736   77394 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 18:27:41.087742   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 18:27:41.087782   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:41.087819   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 18:27:41.087830   77394 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 18:27:41.087883   77394 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 18:27:41.091774   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 18:27:41.091828   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:41.091858   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:41.091873   77394 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0729 18:27:41.104679   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 18:27:41.104702   77394 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 18:27:41.104733   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 18:27:41.104750   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 18:27:41.155992   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 18:27:41.156114   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 18:27:41.156227   77394 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 18:27:41.169410   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 18:27:41.169535   77394 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 18:27:41.176103   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 18:27:41.176116   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 18:27:41.176214   77394 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 18:27:41.241044   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:43.739887   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:40.543004   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:43.044338   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:42.682911   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:43.182279   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:43.682506   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:44.182109   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:44.682593   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:45.183002   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:45.682275   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:46.182491   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:46.683027   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:47.182311   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:44.874768   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.769989933s)
	I0729 18:27:44.874798   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 18:27:44.874827   77394 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 18:27:44.874861   77394 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (3.71860957s)
	I0729 18:27:44.874894   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 18:27:44.874906   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 18:27:44.874930   77394 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.705380577s)
	I0729 18:27:44.874947   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 18:27:44.874972   77394 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (3.698734733s)
	I0729 18:27:44.875001   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 18:27:46.333065   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.458135446s)
	I0729 18:27:46.333109   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 18:27:46.333137   77394 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 18:27:46.333175   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 18:27:45.739935   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:47.740654   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:45.542272   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:47.543683   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:47.682979   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:48.183024   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:48.682708   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:49.182427   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:49.682335   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:50.182146   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:50.682716   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:51.182231   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:51.683106   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:52.182739   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:48.194389   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.861190748s)
	I0729 18:27:48.194419   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 18:27:48.194443   77394 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 18:27:48.194483   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 18:27:50.159353   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.964849018s)
	I0729 18:27:50.159384   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 18:27:50.159427   77394 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 18:27:50.159494   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 18:27:52.256998   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.097482067s)
	I0729 18:27:52.257038   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 18:27:52.257075   77394 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 18:27:52.257125   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 18:27:50.239878   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:52.740167   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:50.042299   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:52.042567   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:54.043462   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:52.682628   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:53.182081   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:53.682919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:54.183194   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:54.682506   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:55.182992   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:55.682152   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:56.183083   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:56.682897   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:57.182789   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:52.899503   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 18:27:52.899539   77394 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 18:27:52.899594   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 18:27:54.868011   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.968389841s)
	I0729 18:27:54.868043   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 18:27:54.868075   77394 cache_images.go:123] Successfully loaded all cached images
	I0729 18:27:54.868080   77394 cache_images.go:92] duration metric: took 14.326689217s to LoadCachedImages
	I0729 18:27:54.868088   77394 kubeadm.go:934] updating node { 192.168.72.80 8443 v1.31.0-beta.0 crio true true} ...
	I0729 18:27:54.868226   77394 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-888056 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-888056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:27:54.868305   77394 ssh_runner.go:195] Run: crio config
	I0729 18:27:54.928569   77394 cni.go:84] Creating CNI manager for ""
	I0729 18:27:54.928591   77394 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:27:54.928604   77394 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:27:54.928633   77394 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.80 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-888056 NodeName:no-preload-888056 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:27:54.928800   77394 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-888056"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:27:54.928871   77394 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 18:27:54.939479   77394 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:27:54.939534   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:27:54.948928   77394 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0729 18:27:54.966700   77394 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 18:27:54.984218   77394 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0729 18:27:55.000813   77394 ssh_runner.go:195] Run: grep 192.168.72.80	control-plane.minikube.internal$ /etc/hosts
	I0729 18:27:55.004529   77394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:55.016140   77394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:55.141053   77394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:27:55.158874   77394 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056 for IP: 192.168.72.80
	I0729 18:27:55.158897   77394 certs.go:194] generating shared ca certs ...
	I0729 18:27:55.158918   77394 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:55.159074   77394 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:27:55.159136   77394 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:27:55.159150   77394 certs.go:256] generating profile certs ...
	I0729 18:27:55.159245   77394 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/client.key
	I0729 18:27:55.159320   77394 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/apiserver.key.f09a151f
	I0729 18:27:55.159373   77394 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/proxy-client.key
	I0729 18:27:55.159511   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:27:55.159552   77394 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:27:55.159566   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:27:55.159600   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:27:55.159641   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:27:55.159680   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:27:55.159734   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:55.160575   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:27:55.211823   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:27:55.248637   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:27:55.287972   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:27:55.317920   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 18:27:55.346034   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:27:55.377569   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:27:55.402593   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 18:27:55.427969   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:27:55.452060   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:27:55.476635   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:27:55.500831   77394 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:27:55.518744   77394 ssh_runner.go:195] Run: openssl version
	I0729 18:27:55.524865   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:27:55.536601   77394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:27:55.541752   77394 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:27:55.541807   77394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:27:55.548070   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 18:27:55.559866   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:27:55.571833   77394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:27:55.576304   77394 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:27:55.576342   77394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:27:55.582204   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:27:55.594531   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:27:55.605773   77394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:55.610585   77394 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:55.610633   77394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:55.616478   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:27:55.628160   77394 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:27:55.632691   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:27:55.638793   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:27:55.644678   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:27:55.651117   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:27:55.657397   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:27:55.663351   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 18:27:55.670080   77394 kubeadm.go:392] StartCluster: {Name:no-preload-888056 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-888056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.80 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:27:55.670183   77394 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:27:55.670248   77394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:55.712280   77394 cri.go:89] found id: ""
	I0729 18:27:55.712343   77394 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:27:55.722878   77394 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:27:55.722898   77394 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:27:55.722935   77394 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:27:55.732704   77394 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:27:55.733646   77394 kubeconfig.go:125] found "no-preload-888056" server: "https://192.168.72.80:8443"
	I0729 18:27:55.736512   77394 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:27:55.748360   77394 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.80
	I0729 18:27:55.748403   77394 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:27:55.748416   77394 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:27:55.748464   77394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:55.789773   77394 cri.go:89] found id: ""
	I0729 18:27:55.789854   77394 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:27:55.808905   77394 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:27:55.819969   77394 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:27:55.819991   77394 kubeadm.go:157] found existing configuration files:
	
	I0729 18:27:55.820064   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:27:55.829392   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:27:55.829445   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:27:55.838934   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:27:55.848659   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:27:55.848720   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:27:55.859490   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:27:55.870024   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:27:55.870076   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:27:55.881599   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:27:55.891805   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:27:55.891869   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:27:55.901750   77394 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:27:55.911525   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:56.021031   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:57.075545   77394 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.054482988s)
	I0729 18:27:57.075571   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:57.302701   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:57.382837   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:55.261397   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:57.738688   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:59.739828   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:56.543870   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:59.043285   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:57.682237   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:58.182211   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:58.682456   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:59.182669   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:59.682863   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:00.182261   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:00.682993   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:01.182832   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:01.682899   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:02.182765   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:57.492480   77394 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:27:57.492580   77394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:57.993240   77394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:58.492965   77394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:58.517442   77394 api_server.go:72] duration metric: took 1.024961129s to wait for apiserver process to appear ...
	I0729 18:27:58.517479   77394 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:27:58.517505   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:27:58.518046   77394 api_server.go:269] stopped: https://192.168.72.80:8443/healthz: Get "https://192.168.72.80:8443/healthz": dial tcp 192.168.72.80:8443: connect: connection refused
	I0729 18:27:59.017614   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:02.088238   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:28:02.088265   77394 api_server.go:103] status: https://192.168.72.80:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:28:02.088277   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:02.147855   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:28:02.147882   77394 api_server.go:103] status: https://192.168.72.80:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:28:02.518439   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:02.525213   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:28:02.525247   77394 api_server.go:103] status: https://192.168.72.80:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:28:03.018275   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:03.024993   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:28:03.025023   77394 api_server.go:103] status: https://192.168.72.80:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:28:03.517564   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:03.523409   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 200:
	ok
	I0729 18:28:03.529656   77394 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 18:28:03.529687   77394 api_server.go:131] duration metric: took 5.01219984s to wait for apiserver health ...
	I0729 18:28:03.529698   77394 cni.go:84] Creating CNI manager for ""
	I0729 18:28:03.529706   77394 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:28:03.531527   77394 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:28:01.740935   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:03.743806   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:01.043882   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:03.542540   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:02.682331   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:03.182154   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:03.682499   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:04.182355   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:04.682338   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:05.182107   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:05.683125   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:06.182481   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:06.683153   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:07.182992   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:03.532788   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:28:03.544878   77394 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 18:28:03.586100   77394 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:28:03.604975   77394 system_pods.go:59] 8 kube-system pods found
	I0729 18:28:03.605012   77394 system_pods.go:61] "coredns-5cfdc65f69-bg5j4" [7a26ffbb-014c-4cf7-b302-214cf78374bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 18:28:03.605022   77394 system_pods.go:61] "etcd-no-preload-888056" [d76f2eb7-67d9-4ba0-8d2f-acfc78559651] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 18:28:03.605036   77394 system_pods.go:61] "kube-apiserver-no-preload-888056" [1dbea0ee-58be-47ca-b4ab-94065413768d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 18:28:03.605044   77394 system_pods.go:61] "kube-controller-manager-no-preload-888056" [fb8ce9d9-2953-4b91-8734-87bd38a63eb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 18:28:03.605051   77394 system_pods.go:61] "kube-proxy-w5z2f" [2425da76-cf2d-41c9-b8db-1370ab5333c5] Running
	I0729 18:28:03.605059   77394 system_pods.go:61] "kube-scheduler-no-preload-888056" [9958567f-116d-4094-9e7e-6208f7358486] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 18:28:03.605066   77394 system_pods.go:61] "metrics-server-78fcd8795b-jcdcw" [c506a5f8-d569-4c3d-9b6e-21b9fc63a86a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:28:03.605073   77394 system_pods.go:61] "storage-provisioner" [ccbc4fa6-1237-46ca-ac80-34972b9a43df] Running
	I0729 18:28:03.605082   77394 system_pods.go:74] duration metric: took 18.959807ms to wait for pod list to return data ...
	I0729 18:28:03.605095   77394 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:28:03.609225   77394 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:28:03.609249   77394 node_conditions.go:123] node cpu capacity is 2
	I0729 18:28:03.609261   77394 node_conditions.go:105] duration metric: took 4.16099ms to run NodePressure ...
	I0729 18:28:03.609278   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:28:03.881440   77394 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 18:28:03.886401   77394 kubeadm.go:739] kubelet initialised
	I0729 18:28:03.886429   77394 kubeadm.go:740] duration metric: took 4.958282ms waiting for restarted kubelet to initialise ...
	I0729 18:28:03.886440   77394 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:28:03.891373   77394 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:05.900595   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:06.239029   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:08.240309   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:06.042541   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:08.043322   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:07.682582   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:08.182094   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:08.682613   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:09.182936   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:09.682444   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:10.182354   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:10.682183   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:11.182502   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:11.682466   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:12.182113   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:08.397084   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:10.399546   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:10.897981   77394 pod_ready.go:92] pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:10.898006   77394 pod_ready.go:81] duration metric: took 7.006606905s for pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:10.898014   77394 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:10.903064   77394 pod_ready.go:92] pod "etcd-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:10.903088   77394 pod_ready.go:81] duration metric: took 5.066249ms for pod "etcd-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:10.903099   77394 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:11.409319   77394 pod_ready.go:92] pod "kube-apiserver-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:11.409344   77394 pod_ready.go:81] duration metric: took 506.238678ms for pod "kube-apiserver-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:11.409353   77394 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:10.250001   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:12.741099   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:10.542146   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:13.042422   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:12.682526   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:13.183014   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:13.682449   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:14.182138   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:14.683065   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:15.182838   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:15.682680   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:16.182714   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:16.682116   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:17.182842   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:13.415469   77394 pod_ready.go:102] pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:13.917111   77394 pod_ready.go:92] pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:13.917134   77394 pod_ready.go:81] duration metric: took 2.507774546s for pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.917149   77394 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-w5z2f" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.922045   77394 pod_ready.go:92] pod "kube-proxy-w5z2f" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:13.922069   77394 pod_ready.go:81] duration metric: took 4.912892ms for pod "kube-proxy-w5z2f" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.922080   77394 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.927633   77394 pod_ready.go:92] pod "kube-scheduler-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:13.927654   77394 pod_ready.go:81] duration metric: took 5.565409ms for pod "kube-scheduler-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.927666   77394 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:15.934081   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:15.240105   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:17.740031   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:19.740077   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:15.042540   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:17.043335   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:19.542061   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:17.683114   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:18.182919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:18.683103   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:19.182074   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:19.683031   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:20.182701   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:20.682749   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:21.182949   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:21.683001   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:22.182167   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:17.935797   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:20.434416   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:21.740735   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:24.238828   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:21.544060   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:24.042058   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:22.682723   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:23.182510   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:23.683084   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:24.182220   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:24.682699   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:25.182288   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:25.682433   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:26.182919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:26.682851   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:27.182225   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:22.435465   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:24.935088   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:26.239694   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:28.240174   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:26.542381   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:29.043706   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:27.682408   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:28.182187   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:28.683034   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:29.182922   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:29.682990   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:29.683063   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:29.730368   78080 cri.go:89] found id: ""
	I0729 18:28:29.730405   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.730413   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:29.730419   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:29.730473   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:29.770368   78080 cri.go:89] found id: ""
	I0729 18:28:29.770398   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.770409   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:29.770426   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:29.770479   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:29.809873   78080 cri.go:89] found id: ""
	I0729 18:28:29.809898   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.809906   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:29.809911   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:29.809970   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:29.848980   78080 cri.go:89] found id: ""
	I0729 18:28:29.849006   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.849016   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:29.849023   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:29.849082   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:29.887261   78080 cri.go:89] found id: ""
	I0729 18:28:29.887292   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.887302   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:29.887311   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:29.887388   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:29.927011   78080 cri.go:89] found id: ""
	I0729 18:28:29.927041   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.927051   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:29.927058   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:29.927122   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:29.965577   78080 cri.go:89] found id: ""
	I0729 18:28:29.965609   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.965619   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:29.965625   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:29.965693   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:29.999180   78080 cri.go:89] found id: ""
	I0729 18:28:29.999210   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.999222   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:29.999233   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:29.999253   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:30.049401   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:30.049433   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:30.063903   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:30.063939   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:30.194776   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:30.194797   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:30.194812   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:30.261861   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:30.261906   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:27.434837   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:29.435257   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:31.435297   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:30.738940   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:32.740748   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:31.542494   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:33.542872   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:32.801821   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:32.814741   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:32.814815   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:32.853490   78080 cri.go:89] found id: ""
	I0729 18:28:32.853514   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.853522   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:32.853530   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:32.853580   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:32.890314   78080 cri.go:89] found id: ""
	I0729 18:28:32.890339   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.890349   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:32.890356   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:32.890435   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:32.928231   78080 cri.go:89] found id: ""
	I0729 18:28:32.928255   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.928262   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:32.928268   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:32.928314   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:32.964024   78080 cri.go:89] found id: ""
	I0729 18:28:32.964054   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.964065   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:32.964072   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:32.964136   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:33.002099   78080 cri.go:89] found id: ""
	I0729 18:28:33.002127   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.002140   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:33.002146   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:33.002195   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:33.042238   78080 cri.go:89] found id: ""
	I0729 18:28:33.042265   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.042273   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:33.042278   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:33.042331   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:33.078715   78080 cri.go:89] found id: ""
	I0729 18:28:33.078741   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.078750   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:33.078756   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:33.078816   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:33.123304   78080 cri.go:89] found id: ""
	I0729 18:28:33.123334   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.123342   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:33.123351   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:33.123366   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:33.198950   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:33.198994   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:33.223566   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:33.223594   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:33.306500   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:33.306526   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:33.306541   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:33.379386   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:33.379421   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:35.926834   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:35.942218   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:35.942296   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:35.980115   78080 cri.go:89] found id: ""
	I0729 18:28:35.980142   78080 logs.go:276] 0 containers: []
	W0729 18:28:35.980153   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:35.980159   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:35.980221   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:36.015354   78080 cri.go:89] found id: ""
	I0729 18:28:36.015379   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.015387   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:36.015392   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:36.015456   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:36.056411   78080 cri.go:89] found id: ""
	I0729 18:28:36.056435   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.056445   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:36.056451   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:36.056499   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:36.099153   78080 cri.go:89] found id: ""
	I0729 18:28:36.099180   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.099188   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:36.099193   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:36.099241   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:36.133427   78080 cri.go:89] found id: ""
	I0729 18:28:36.133459   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.133470   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:36.133477   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:36.133544   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:36.168619   78080 cri.go:89] found id: ""
	I0729 18:28:36.168646   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.168657   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:36.168664   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:36.168723   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:36.203636   78080 cri.go:89] found id: ""
	I0729 18:28:36.203666   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.203676   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:36.203684   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:36.203747   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:36.246495   78080 cri.go:89] found id: ""
	I0729 18:28:36.246523   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.246533   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:36.246544   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:36.246561   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:36.260630   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:36.260656   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:36.337406   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:36.337424   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:36.337435   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:36.410016   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:36.410049   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:36.453458   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:36.453492   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:33.435859   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:35.934955   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:35.240070   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:37.739406   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:39.740035   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:35.543153   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:37.543467   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:39.543573   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:39.004147   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:39.018217   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:39.018279   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:39.054130   78080 cri.go:89] found id: ""
	I0729 18:28:39.054155   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.054166   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:39.054172   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:39.054219   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:39.090458   78080 cri.go:89] found id: ""
	I0729 18:28:39.090482   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.090490   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:39.090501   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:39.090548   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:39.126933   78080 cri.go:89] found id: ""
	I0729 18:28:39.126960   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.126971   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:39.126978   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:39.127042   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:39.162324   78080 cri.go:89] found id: ""
	I0729 18:28:39.162352   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.162381   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:39.162389   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:39.162450   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:39.202440   78080 cri.go:89] found id: ""
	I0729 18:28:39.202464   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.202471   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:39.202477   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:39.202537   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:39.238314   78080 cri.go:89] found id: ""
	I0729 18:28:39.238342   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.238352   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:39.238368   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:39.238436   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:39.275545   78080 cri.go:89] found id: ""
	I0729 18:28:39.275584   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.275592   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:39.275598   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:39.275663   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:39.311575   78080 cri.go:89] found id: ""
	I0729 18:28:39.311603   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.311614   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:39.311624   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:39.311643   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:39.367667   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:39.367711   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:39.381823   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:39.381852   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:39.456060   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:39.456083   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:39.456100   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:39.531747   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:39.531784   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:42.077771   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:42.092424   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:42.092512   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:42.128710   78080 cri.go:89] found id: ""
	I0729 18:28:42.128744   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.128756   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:42.128765   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:42.128834   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:42.166092   78080 cri.go:89] found id: ""
	I0729 18:28:42.166126   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.166133   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:42.166138   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:42.166186   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:42.200955   78080 cri.go:89] found id: ""
	I0729 18:28:42.200981   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.200989   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:42.200994   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:42.201053   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:38.435476   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:40.935166   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:42.240354   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:44.739322   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:41.543640   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:43.543781   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:42.240176   78080 cri.go:89] found id: ""
	I0729 18:28:42.240203   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.240212   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:42.240219   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:42.240279   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:42.279844   78080 cri.go:89] found id: ""
	I0729 18:28:42.279872   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.279880   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:42.279885   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:42.279946   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:42.313071   78080 cri.go:89] found id: ""
	I0729 18:28:42.313099   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.313108   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:42.313114   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:42.313187   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:42.348540   78080 cri.go:89] found id: ""
	I0729 18:28:42.348566   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.348573   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:42.348580   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:42.348630   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:42.384688   78080 cri.go:89] found id: ""
	I0729 18:28:42.384714   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.384725   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:42.384736   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:42.384750   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:42.399178   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:42.399206   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:42.472903   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:42.472921   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:42.472937   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:42.558541   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:42.558573   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:42.599403   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:42.599432   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:45.154026   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:45.167130   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:45.167200   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:45.203627   78080 cri.go:89] found id: ""
	I0729 18:28:45.203654   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.203663   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:45.203668   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:45.203714   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:45.242293   78080 cri.go:89] found id: ""
	I0729 18:28:45.242316   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.242325   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:45.242332   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:45.242403   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:45.282253   78080 cri.go:89] found id: ""
	I0729 18:28:45.282275   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.282282   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:45.282288   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:45.282335   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:45.320151   78080 cri.go:89] found id: ""
	I0729 18:28:45.320175   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.320183   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:45.320189   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:45.320250   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:45.356210   78080 cri.go:89] found id: ""
	I0729 18:28:45.356236   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.356247   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:45.356254   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:45.356316   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:45.393083   78080 cri.go:89] found id: ""
	I0729 18:28:45.393116   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.393131   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:45.393139   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:45.393199   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:45.430235   78080 cri.go:89] found id: ""
	I0729 18:28:45.430263   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.430274   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:45.430282   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:45.430346   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:45.463068   78080 cri.go:89] found id: ""
	I0729 18:28:45.463132   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.463143   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:45.463155   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:45.463203   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:45.541411   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:45.541441   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:45.581967   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:45.582001   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:45.639427   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:45.639459   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:45.655715   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:45.655741   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:45.725820   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:42.943815   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:45.435444   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:46.739873   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:49.240293   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:46.042576   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:48.042735   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:48.226252   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:48.240419   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:48.240494   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:48.271506   78080 cri.go:89] found id: ""
	I0729 18:28:48.271538   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.271550   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:48.271557   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:48.271615   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:48.305163   78080 cri.go:89] found id: ""
	I0729 18:28:48.305186   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.305198   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:48.305203   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:48.305252   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:48.336453   78080 cri.go:89] found id: ""
	I0729 18:28:48.336480   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.336492   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:48.336500   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:48.336557   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:48.368690   78080 cri.go:89] found id: ""
	I0729 18:28:48.368713   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.368720   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:48.368725   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:48.368784   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:48.401723   78080 cri.go:89] found id: ""
	I0729 18:28:48.401746   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.401753   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:48.401758   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:48.401822   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:48.439876   78080 cri.go:89] found id: ""
	I0729 18:28:48.439896   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.439903   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:48.439908   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:48.439956   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:48.473352   78080 cri.go:89] found id: ""
	I0729 18:28:48.473383   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.473394   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:48.473401   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:48.473461   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:48.506752   78080 cri.go:89] found id: ""
	I0729 18:28:48.506779   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.506788   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:48.506799   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:48.506815   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:48.547513   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:48.547535   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:48.599704   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:48.599733   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:48.613577   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:48.613604   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:48.681272   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:48.681290   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:48.681301   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:51.267397   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:51.280243   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:51.280317   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:51.314047   78080 cri.go:89] found id: ""
	I0729 18:28:51.314078   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.314090   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:51.314097   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:51.314162   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:51.346048   78080 cri.go:89] found id: ""
	I0729 18:28:51.346073   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.346080   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:51.346085   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:51.346144   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:51.380511   78080 cri.go:89] found id: ""
	I0729 18:28:51.380543   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.380553   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:51.380561   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:51.380637   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:51.415189   78080 cri.go:89] found id: ""
	I0729 18:28:51.415213   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.415220   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:51.415227   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:51.415310   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:51.454324   78080 cri.go:89] found id: ""
	I0729 18:28:51.454351   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.454380   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:51.454388   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:51.454449   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:51.488737   78080 cri.go:89] found id: ""
	I0729 18:28:51.488768   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.488779   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:51.488787   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:51.488848   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:51.528869   78080 cri.go:89] found id: ""
	I0729 18:28:51.528903   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.528912   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:51.528920   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:51.528972   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:51.566039   78080 cri.go:89] found id: ""
	I0729 18:28:51.566067   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.566075   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:51.566086   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:51.566102   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:51.604746   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:51.604774   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:51.661048   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:51.661089   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:51.675420   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:51.675447   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:51.754496   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:51.754531   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:51.754548   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:47.934575   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:49.935187   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:51.247773   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:53.740386   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:50.043378   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:52.543104   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:54.335796   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:54.350726   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:54.350784   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:54.389661   78080 cri.go:89] found id: ""
	I0729 18:28:54.389683   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.389694   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:54.389701   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:54.389761   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:54.427073   78080 cri.go:89] found id: ""
	I0729 18:28:54.427100   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.427110   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:54.427117   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:54.427178   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:54.466761   78080 cri.go:89] found id: ""
	I0729 18:28:54.466793   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.466802   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:54.466808   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:54.466871   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:54.501115   78080 cri.go:89] found id: ""
	I0729 18:28:54.501144   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.501159   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:54.501167   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:54.501229   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:54.535430   78080 cri.go:89] found id: ""
	I0729 18:28:54.535461   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.535472   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:54.535480   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:54.535543   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:54.574994   78080 cri.go:89] found id: ""
	I0729 18:28:54.575024   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.575034   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:54.575041   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:54.575107   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:54.608770   78080 cri.go:89] found id: ""
	I0729 18:28:54.608792   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.608800   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:54.608805   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:54.608850   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:54.648026   78080 cri.go:89] found id: ""
	I0729 18:28:54.648050   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.648057   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:54.648066   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:54.648077   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:54.728445   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:54.728485   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:54.774752   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:54.774781   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:54.826549   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:54.826582   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:54.840366   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:54.840394   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:54.907422   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:52.434956   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:54.436125   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:56.933929   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:56.239045   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:58.239967   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:55.041898   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:57.042968   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:59.542837   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:57.408469   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:57.421855   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:57.421923   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:57.457794   78080 cri.go:89] found id: ""
	I0729 18:28:57.457816   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.457824   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:57.457829   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:57.457908   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:57.492851   78080 cri.go:89] found id: ""
	I0729 18:28:57.492880   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.492888   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:57.492894   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:57.492946   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:57.528221   78080 cri.go:89] found id: ""
	I0729 18:28:57.528249   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.528258   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:57.528265   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:57.528330   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:57.565504   78080 cri.go:89] found id: ""
	I0729 18:28:57.565536   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.565547   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:57.565554   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:57.565618   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:57.599391   78080 cri.go:89] found id: ""
	I0729 18:28:57.599418   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.599426   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:57.599432   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:57.599491   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:57.643757   78080 cri.go:89] found id: ""
	I0729 18:28:57.643784   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.643798   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:57.643806   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:57.643867   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:57.680825   78080 cri.go:89] found id: ""
	I0729 18:28:57.680853   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.680864   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:57.680871   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:57.680936   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:57.714450   78080 cri.go:89] found id: ""
	I0729 18:28:57.714479   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.714490   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:57.714500   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:57.714516   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:57.798411   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:57.798437   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:57.798453   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:57.878210   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:57.878246   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:57.917476   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:57.917505   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:57.971395   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:57.971432   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
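	Each retry cycle above gathers the same five log sources with identical commands. For reference, the full set, lifted verbatim from the Run: lines (re-running them interactively on the node is an assumption about access, not something the test itself does):
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u crio -n 400
	    sudo crictl ps -a || sudo docker ps -a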
	I0729 18:29:00.486419   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:00.500625   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:00.500703   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:00.539625   78080 cri.go:89] found id: ""
	I0729 18:29:00.539650   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.539659   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:00.539682   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:00.539737   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:00.577252   78080 cri.go:89] found id: ""
	I0729 18:29:00.577284   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.577297   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:00.577303   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:00.577350   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:00.611850   78080 cri.go:89] found id: ""
	I0729 18:29:00.611878   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.611886   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:00.611892   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:00.611939   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:00.648964   78080 cri.go:89] found id: ""
	I0729 18:29:00.648989   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.648996   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:00.649003   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:00.649062   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:00.686124   78080 cri.go:89] found id: ""
	I0729 18:29:00.686147   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.686156   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:00.686161   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:00.686217   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:00.721166   78080 cri.go:89] found id: ""
	I0729 18:29:00.721195   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.721205   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:00.721213   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:00.721276   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:00.758394   78080 cri.go:89] found id: ""
	I0729 18:29:00.758423   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.758431   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:00.758436   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:00.758491   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:00.793487   78080 cri.go:89] found id: ""
	I0729 18:29:00.793514   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.793523   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:00.793533   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:00.793549   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:00.807069   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:00.807106   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:00.880611   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:00.880629   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:00.880641   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:00.963534   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:00.963568   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:01.004145   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:01.004174   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:58.933964   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:00.934221   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:00.739676   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:02.741020   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:02.042346   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:04.541902   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:03.560985   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:03.574407   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:03.574476   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:03.608027   78080 cri.go:89] found id: ""
	I0729 18:29:03.608048   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.608057   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:03.608062   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:03.608119   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:03.644777   78080 cri.go:89] found id: ""
	I0729 18:29:03.644804   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.644814   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:03.644821   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:03.644895   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:03.684050   78080 cri.go:89] found id: ""
	I0729 18:29:03.684074   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.684082   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:03.684089   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:03.684149   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:03.724350   78080 cri.go:89] found id: ""
	I0729 18:29:03.724376   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.724383   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:03.724390   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:03.724439   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:03.766859   78080 cri.go:89] found id: ""
	I0729 18:29:03.766887   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.766898   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:03.766905   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:03.766967   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:03.800535   78080 cri.go:89] found id: ""
	I0729 18:29:03.800562   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.800572   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:03.800579   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:03.800639   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:03.834991   78080 cri.go:89] found id: ""
	I0729 18:29:03.835011   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.835019   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:03.835024   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:03.835073   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:03.869159   78080 cri.go:89] found id: ""
	I0729 18:29:03.869191   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.869201   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:03.869211   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:03.869226   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:03.940451   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:03.940469   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:03.940487   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:04.020880   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:04.020910   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:04.064707   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:04.064728   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:04.121551   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:04.121587   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:06.636983   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:06.651500   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:06.651582   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:06.686556   78080 cri.go:89] found id: ""
	I0729 18:29:06.686582   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.686592   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:06.686599   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:06.686660   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:06.721967   78080 cri.go:89] found id: ""
	I0729 18:29:06.721996   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.722008   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:06.722016   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:06.722115   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:06.760409   78080 cri.go:89] found id: ""
	I0729 18:29:06.760433   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.760440   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:06.760445   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:06.760499   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:06.794050   78080 cri.go:89] found id: ""
	I0729 18:29:06.794074   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.794081   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:06.794087   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:06.794143   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:06.826445   78080 cri.go:89] found id: ""
	I0729 18:29:06.826471   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.826478   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:06.826484   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:06.826544   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:06.860680   78080 cri.go:89] found id: ""
	I0729 18:29:06.860700   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.860706   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:06.860712   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:06.860761   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:06.898192   78080 cri.go:89] found id: ""
	I0729 18:29:06.898215   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.898223   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:06.898229   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:06.898284   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:06.931892   78080 cri.go:89] found id: ""
	I0729 18:29:06.931920   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.931930   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:06.931940   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:06.931955   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:06.987265   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:06.987294   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:07.043520   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:07.043547   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:07.056995   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:07.057019   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:07.124932   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:07.124956   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:07.124971   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:03.435778   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:05.936004   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:05.239352   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:07.239383   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:06.542526   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:08.543497   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:09.708947   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:09.723497   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:09.723565   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:09.762686   78080 cri.go:89] found id: ""
	I0729 18:29:09.762714   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.762725   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:09.762733   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:09.762797   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:09.799674   78080 cri.go:89] found id: ""
	I0729 18:29:09.799699   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.799708   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:09.799715   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:09.799775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:09.836121   78080 cri.go:89] found id: ""
	I0729 18:29:09.836147   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.836156   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:09.836161   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:09.836209   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:09.872758   78080 cri.go:89] found id: ""
	I0729 18:29:09.872783   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.872791   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:09.872797   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:09.872842   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:09.911681   78080 cri.go:89] found id: ""
	I0729 18:29:09.911711   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.911719   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:09.911724   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:09.911773   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:09.951531   78080 cri.go:89] found id: ""
	I0729 18:29:09.951554   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.951561   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:09.951567   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:09.951624   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:09.985568   78080 cri.go:89] found id: ""
	I0729 18:29:09.985597   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.985606   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:09.985612   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:09.985661   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:10.020369   78080 cri.go:89] found id: ""
	I0729 18:29:10.020394   78080 logs.go:276] 0 containers: []
	W0729 18:29:10.020402   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:10.020409   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:10.020421   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:10.076538   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:10.076574   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:10.090954   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:10.090980   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:10.165843   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:10.165875   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:10.165890   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:10.242438   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:10.242469   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:08.434575   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:10.934523   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:09.744446   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:12.239540   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:14.242060   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:10.544272   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:13.043064   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:12.781369   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:12.797066   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:12.797160   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:12.832500   78080 cri.go:89] found id: ""
	I0729 18:29:12.832528   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.832545   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:12.832552   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:12.832615   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:12.866390   78080 cri.go:89] found id: ""
	I0729 18:29:12.866420   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.866428   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:12.866434   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:12.866494   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:12.901616   78080 cri.go:89] found id: ""
	I0729 18:29:12.901636   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.901644   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:12.901649   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:12.901713   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:12.935954   78080 cri.go:89] found id: ""
	I0729 18:29:12.935976   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.935985   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:12.935993   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:12.936053   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:12.970570   78080 cri.go:89] found id: ""
	I0729 18:29:12.970623   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.970637   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:12.970645   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:12.970702   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:13.008629   78080 cri.go:89] found id: ""
	I0729 18:29:13.008658   78080 logs.go:276] 0 containers: []
	W0729 18:29:13.008666   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:13.008672   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:13.008725   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:13.045689   78080 cri.go:89] found id: ""
	I0729 18:29:13.045713   78080 logs.go:276] 0 containers: []
	W0729 18:29:13.045721   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:13.045726   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:13.045773   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:13.084707   78080 cri.go:89] found id: ""
	I0729 18:29:13.084735   78080 logs.go:276] 0 containers: []
	W0729 18:29:13.084745   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:13.084756   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:13.084774   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:13.161884   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:13.161920   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:13.205377   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:13.205410   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:13.258161   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:13.258189   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:13.272208   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:13.272240   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:13.347519   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
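	Note that crictl ps -a --quiet --name=<component> prints only container IDs, so the found id: "" / 0 containers lines above mean that no matching container exists in any state on this node; the query itself succeeded. A one-line sanity check built from the same filter values used in this log:
	    sudo crictl ps -a --quiet --name=etcd | wc -l    # 0 here means no etcd container was ever created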
	I0729 18:29:15.848068   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:15.861773   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:15.861851   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:15.902421   78080 cri.go:89] found id: ""
	I0729 18:29:15.902449   78080 logs.go:276] 0 containers: []
	W0729 18:29:15.902458   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:15.902466   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:15.902532   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:15.939552   78080 cri.go:89] found id: ""
	I0729 18:29:15.939576   78080 logs.go:276] 0 containers: []
	W0729 18:29:15.939583   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:15.939588   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:15.939645   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:15.974424   78080 cri.go:89] found id: ""
	I0729 18:29:15.974454   78080 logs.go:276] 0 containers: []
	W0729 18:29:15.974463   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:15.974468   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:15.974516   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:16.010955   78080 cri.go:89] found id: ""
	I0729 18:29:16.010993   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.011000   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:16.011006   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:16.011062   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:16.046785   78080 cri.go:89] found id: ""
	I0729 18:29:16.046815   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.046825   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:16.046832   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:16.046887   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:16.082691   78080 cri.go:89] found id: ""
	I0729 18:29:16.082721   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.082731   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:16.082739   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:16.082796   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:16.127633   78080 cri.go:89] found id: ""
	I0729 18:29:16.127663   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.127676   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:16.127684   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:16.127741   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:16.162641   78080 cri.go:89] found id: ""
	I0729 18:29:16.162662   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.162670   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:16.162684   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:16.162695   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:16.215132   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:16.215162   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:16.229581   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:16.229607   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:16.303178   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:16.303198   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:16.303212   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:16.383739   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:16.383775   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:12.934751   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:14.934965   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:16.739047   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:18.739145   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:15.043163   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:17.544340   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:18.924292   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:18.937571   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:18.937626   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:18.970523   78080 cri.go:89] found id: ""
	I0729 18:29:18.970554   78080 logs.go:276] 0 containers: []
	W0729 18:29:18.970563   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:18.970568   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:18.970624   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:19.005448   78080 cri.go:89] found id: ""
	I0729 18:29:19.005471   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.005478   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:19.005483   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:19.005538   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:19.044352   78080 cri.go:89] found id: ""
	I0729 18:29:19.044377   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.044386   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:19.044393   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:19.044448   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:19.079288   78080 cri.go:89] found id: ""
	I0729 18:29:19.079317   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.079327   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:19.079333   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:19.079402   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:19.122932   78080 cri.go:89] found id: ""
	I0729 18:29:19.122954   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.122961   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:19.122967   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:19.123020   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:19.166992   78080 cri.go:89] found id: ""
	I0729 18:29:19.167018   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.167025   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:19.167031   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:19.167103   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:19.215301   78080 cri.go:89] found id: ""
	I0729 18:29:19.215331   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.215341   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:19.215355   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:19.215419   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:19.267635   78080 cri.go:89] found id: ""
	I0729 18:29:19.267657   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.267664   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:19.267671   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:19.267682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:19.319924   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:19.319962   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:19.333987   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:19.334010   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:19.406541   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:19.406558   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:19.406571   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:19.487388   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:19.487426   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:22.027745   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:22.041145   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:22.041218   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:22.080000   78080 cri.go:89] found id: ""
	I0729 18:29:22.080022   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.080029   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:22.080034   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:22.080079   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:22.116385   78080 cri.go:89] found id: ""
	I0729 18:29:22.116415   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.116425   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:22.116431   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:22.116492   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:22.150530   78080 cri.go:89] found id: ""
	I0729 18:29:22.150552   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.150559   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:22.150565   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:22.150621   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:22.188782   78080 cri.go:89] found id: ""
	I0729 18:29:22.188808   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.188817   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:22.188822   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:22.188873   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:17.434007   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:19.434864   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:21.935573   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:20.739852   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:23.239853   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:20.044010   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:22.542952   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:24.543614   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:22.227117   78080 cri.go:89] found id: ""
	I0729 18:29:22.227152   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.227162   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:22.227169   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:22.227234   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:22.263057   78080 cri.go:89] found id: ""
	I0729 18:29:22.263079   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.263086   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:22.263091   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:22.263145   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:22.297368   78080 cri.go:89] found id: ""
	I0729 18:29:22.297391   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.297399   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:22.297406   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:22.297466   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:22.334117   78080 cri.go:89] found id: ""
	I0729 18:29:22.334149   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.334159   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:22.334170   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:22.334184   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:22.349344   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:22.349369   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:22.415720   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:22.415743   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:22.415758   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:22.494937   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:22.494971   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:22.536352   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:22.536382   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:25.087795   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:25.103985   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:25.104050   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:25.158532   78080 cri.go:89] found id: ""
	I0729 18:29:25.158562   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.158572   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:25.158580   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:25.158641   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:25.216740   78080 cri.go:89] found id: ""
	I0729 18:29:25.216762   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.216769   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:25.216775   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:25.216827   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:25.254827   78080 cri.go:89] found id: ""
	I0729 18:29:25.254855   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.254865   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:25.254872   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:25.254934   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:25.289377   78080 cri.go:89] found id: ""
	I0729 18:29:25.289407   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.289417   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:25.289424   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:25.289484   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:25.328111   78080 cri.go:89] found id: ""
	I0729 18:29:25.328144   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.328153   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:25.328161   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:25.328224   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:25.364779   78080 cri.go:89] found id: ""
	I0729 18:29:25.364808   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.364815   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:25.364827   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:25.364874   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:25.402906   78080 cri.go:89] found id: ""
	I0729 18:29:25.402935   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.402942   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:25.402948   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:25.403007   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:25.438747   78080 cri.go:89] found id: ""
	I0729 18:29:25.438770   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.438778   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:25.438787   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:25.438803   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:25.452803   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:25.452829   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:25.527575   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:25.527593   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:25.527610   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:25.622437   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:25.622482   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:25.661451   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:25.661478   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
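Every "describe nodes" attempt in these cycles fails the same way: kubectl cannot reach the API server at localhost:8443, which is consistent with the empty kube-apiserver container listings. A quick way to confirm whether anything is listening on that port is a plain TCP dial, sketched below; the host and port come from the error text in the log, and this probe is not part of minikube's tooling.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// The kubectl error above points at localhost:8443; dial it directly to
    	// see whether an API server (or anything else) is accepting connections.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// Expected while no kube-apiserver container exists.
    		fmt.Println("apiserver port not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }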
	I0729 18:29:23.936249   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:26.434496   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:25.739358   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:27.739702   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:27.043125   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:29.542130   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:28.213898   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:28.230013   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:28.230071   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:28.265484   78080 cri.go:89] found id: ""
	I0729 18:29:28.265511   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.265521   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:28.265530   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:28.265594   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:28.306374   78080 cri.go:89] found id: ""
	I0729 18:29:28.306428   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.306441   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:28.306448   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:28.306501   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:28.340274   78080 cri.go:89] found id: ""
	I0729 18:29:28.340299   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.340309   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:28.340316   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:28.340379   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:28.373928   78080 cri.go:89] found id: ""
	I0729 18:29:28.373973   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.373982   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:28.373990   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:28.374052   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:28.407075   78080 cri.go:89] found id: ""
	I0729 18:29:28.407107   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.407120   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:28.407129   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:28.407215   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:28.444501   78080 cri.go:89] found id: ""
	I0729 18:29:28.444528   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.444536   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:28.444543   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:28.444614   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:28.487513   78080 cri.go:89] found id: ""
	I0729 18:29:28.487540   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.487548   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:28.487554   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:28.487611   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:28.521957   78080 cri.go:89] found id: ""
	I0729 18:29:28.521990   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.522000   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:28.522011   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:28.522027   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:28.536880   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:28.536918   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:28.609486   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:28.609513   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:28.609528   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:28.694086   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:28.694125   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:28.733930   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:28.733964   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:31.292260   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:31.305840   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:31.305899   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:31.342510   78080 cri.go:89] found id: ""
	I0729 18:29:31.342539   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.342550   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:31.342557   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:31.342613   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:31.375093   78080 cri.go:89] found id: ""
	I0729 18:29:31.375118   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.375128   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:31.375135   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:31.375198   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:31.408554   78080 cri.go:89] found id: ""
	I0729 18:29:31.408576   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.408583   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:31.408588   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:31.408660   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:31.448748   78080 cri.go:89] found id: ""
	I0729 18:29:31.448774   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.448783   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:31.448796   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:31.448855   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:31.483541   78080 cri.go:89] found id: ""
	I0729 18:29:31.483564   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.483572   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:31.483578   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:31.483637   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:31.518173   78080 cri.go:89] found id: ""
	I0729 18:29:31.518198   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.518209   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:31.518217   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:31.518279   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:31.553345   78080 cri.go:89] found id: ""
	I0729 18:29:31.553371   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.553379   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:31.553384   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:31.553439   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:31.591857   78080 cri.go:89] found id: ""
	I0729 18:29:31.591887   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.591905   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:31.591916   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:31.591929   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:31.648404   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:31.648436   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:31.661455   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:31.661477   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:31.732978   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:31.732997   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:31.733009   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:31.812105   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:31.812145   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:28.435517   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:30.436822   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:30.239755   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:32.739231   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:34.739534   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:31.542847   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:33.543096   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
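The interleaved pod_ready lines come from other clusters in this run polling their metrics-server pods and repeatedly finding the Ready condition False. A rough equivalent of that readiness check using client-go is sketched below as an assumption-labelled example: the kubeconfig path and pod name are placeholders, and this is not minikube's actual pod_ready implementation.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's Ready condition is True, the same
    // condition the log lines above keep reporting as "False".
    func podIsReady(pod *corev1.Pod) bool {
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Placeholder kubeconfig path and pod name; adjust for a real cluster.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-xxxxx", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("pod %s ready: %v\n", pod.Name, podIsReady(pod))
    }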
	I0729 18:29:34.353079   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:34.366759   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:34.366817   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:34.400944   78080 cri.go:89] found id: ""
	I0729 18:29:34.400974   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.400984   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:34.400991   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:34.401055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:34.439348   78080 cri.go:89] found id: ""
	I0729 18:29:34.439373   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.439383   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:34.439395   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:34.439444   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:34.473969   78080 cri.go:89] found id: ""
	I0729 18:29:34.473991   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.474010   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:34.474017   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:34.474080   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:34.507741   78080 cri.go:89] found id: ""
	I0729 18:29:34.507770   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.507778   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:34.507784   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:34.507845   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:34.543794   78080 cri.go:89] found id: ""
	I0729 18:29:34.543815   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.543823   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:34.543830   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:34.543895   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:34.577893   78080 cri.go:89] found id: ""
	I0729 18:29:34.577918   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.577926   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:34.577931   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:34.577978   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:34.612703   78080 cri.go:89] found id: ""
	I0729 18:29:34.612735   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.612745   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:34.612752   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:34.612815   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:34.648167   78080 cri.go:89] found id: ""
	I0729 18:29:34.648197   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.648209   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:34.648219   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:34.648233   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:34.689821   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:34.689848   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:34.743902   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:34.743935   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:34.757400   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:34.757426   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:34.833684   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:34.833706   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:34.833721   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:32.934207   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:34.936549   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:37.238618   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:39.239761   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:36.042461   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:38.543304   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:37.419270   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:37.433249   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:37.433301   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:37.469991   78080 cri.go:89] found id: ""
	I0729 18:29:37.470021   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.470031   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:37.470038   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:37.470098   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:37.504511   78080 cri.go:89] found id: ""
	I0729 18:29:37.504537   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.504548   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:37.504554   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:37.504612   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:37.545304   78080 cri.go:89] found id: ""
	I0729 18:29:37.545332   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.545342   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:37.545349   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:37.545406   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:37.584255   78080 cri.go:89] found id: ""
	I0729 18:29:37.584280   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.584287   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:37.584292   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:37.584345   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:37.620917   78080 cri.go:89] found id: ""
	I0729 18:29:37.620943   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.620951   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:37.620958   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:37.621022   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:37.659381   78080 cri.go:89] found id: ""
	I0729 18:29:37.659405   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.659414   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:37.659419   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:37.659486   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:37.701337   78080 cri.go:89] found id: ""
	I0729 18:29:37.701360   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.701368   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:37.701373   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:37.701426   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:37.737142   78080 cri.go:89] found id: ""
	I0729 18:29:37.737168   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.737177   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:37.737186   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:37.737201   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:37.789951   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:37.789992   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:37.804759   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:37.804784   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:37.881777   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:37.881794   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:37.881808   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:37.970593   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:37.970625   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:40.511557   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:40.525472   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:40.525527   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:40.564227   78080 cri.go:89] found id: ""
	I0729 18:29:40.564253   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.564263   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:40.564270   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:40.564336   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:40.600384   78080 cri.go:89] found id: ""
	I0729 18:29:40.600409   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.600417   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:40.600423   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:40.600475   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:40.634819   78080 cri.go:89] found id: ""
	I0729 18:29:40.634843   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.634858   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:40.634866   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:40.634913   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:40.669963   78080 cri.go:89] found id: ""
	I0729 18:29:40.669991   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.669999   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:40.670006   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:40.670069   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:40.705680   78080 cri.go:89] found id: ""
	I0729 18:29:40.705705   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.705714   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:40.705719   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:40.705775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:40.743691   78080 cri.go:89] found id: ""
	I0729 18:29:40.743715   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.743725   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:40.743732   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:40.743820   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:40.783858   78080 cri.go:89] found id: ""
	I0729 18:29:40.783889   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.783898   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:40.783903   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:40.783953   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:40.821499   78080 cri.go:89] found id: ""
	I0729 18:29:40.821527   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.821537   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:40.821547   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:40.821562   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:40.874941   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:40.874972   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:40.888034   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:40.888057   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:40.960013   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:40.960032   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:40.960044   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:41.043013   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:41.043042   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:37.435119   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:39.435967   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:41.934232   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:41.739070   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:43.739497   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:40.543453   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:43.042528   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:43.583555   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:43.597120   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:43.597193   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:43.631500   78080 cri.go:89] found id: ""
	I0729 18:29:43.631526   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.631535   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:43.631542   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:43.631607   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:43.667003   78080 cri.go:89] found id: ""
	I0729 18:29:43.667029   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.667037   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:43.667042   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:43.667102   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:43.701471   78080 cri.go:89] found id: ""
	I0729 18:29:43.701502   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.701510   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:43.701515   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:43.701569   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:43.740037   78080 cri.go:89] found id: ""
	I0729 18:29:43.740058   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.740067   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:43.740074   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:43.740145   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:43.772584   78080 cri.go:89] found id: ""
	I0729 18:29:43.772610   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.772620   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:43.772626   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:43.772689   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:43.806340   78080 cri.go:89] found id: ""
	I0729 18:29:43.806382   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.806393   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:43.806401   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:43.806480   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:43.840085   78080 cri.go:89] found id: ""
	I0729 18:29:43.840109   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.840118   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:43.840133   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:43.840198   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:43.873412   78080 cri.go:89] found id: ""
	I0729 18:29:43.873438   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.873448   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:43.873458   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:43.873473   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:43.928762   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:43.928790   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:43.944129   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:43.944156   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:44.017330   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:44.017349   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:44.017361   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:44.106858   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:44.106915   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:46.651050   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:46.665253   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:46.665310   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:46.698846   78080 cri.go:89] found id: ""
	I0729 18:29:46.698871   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.698881   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:46.698888   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:46.698956   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:46.734354   78080 cri.go:89] found id: ""
	I0729 18:29:46.734395   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.734405   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:46.734413   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:46.734468   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:46.771978   78080 cri.go:89] found id: ""
	I0729 18:29:46.771999   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.772007   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:46.772012   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:46.772059   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:46.807231   78080 cri.go:89] found id: ""
	I0729 18:29:46.807255   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.807263   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:46.807272   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:46.807329   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:46.842257   78080 cri.go:89] found id: ""
	I0729 18:29:46.842278   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.842306   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:46.842312   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:46.842373   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:46.876287   78080 cri.go:89] found id: ""
	I0729 18:29:46.876309   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.876317   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:46.876323   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:46.876389   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:46.909695   78080 cri.go:89] found id: ""
	I0729 18:29:46.909719   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.909726   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:46.909731   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:46.909806   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:46.951768   78080 cri.go:89] found id: ""
	I0729 18:29:46.951798   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.951807   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:46.951815   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:46.951825   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:47.025467   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:47.025485   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:47.025497   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:47.106336   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:47.106391   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:47.145652   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:47.145682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:47.200857   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:47.200886   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:43.935210   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:46.434346   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:45.739606   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:48.240282   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:45.544442   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:48.042872   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:49.715401   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:49.729703   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:49.729776   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:49.770016   78080 cri.go:89] found id: ""
	I0729 18:29:49.770039   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.770062   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:49.770070   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:49.770127   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:49.805464   78080 cri.go:89] found id: ""
	I0729 18:29:49.805487   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.805495   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:49.805500   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:49.805560   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:49.838739   78080 cri.go:89] found id: ""
	I0729 18:29:49.838770   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.838782   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:49.838789   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:49.838861   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:49.881168   78080 cri.go:89] found id: ""
	I0729 18:29:49.881194   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.881202   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:49.881208   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:49.881269   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:49.919978   78080 cri.go:89] found id: ""
	I0729 18:29:49.919999   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.920006   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:49.920012   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:49.920079   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:49.958971   78080 cri.go:89] found id: ""
	I0729 18:29:49.958996   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.959006   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:49.959013   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:49.959063   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:50.001253   78080 cri.go:89] found id: ""
	I0729 18:29:50.001281   78080 logs.go:276] 0 containers: []
	W0729 18:29:50.001291   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:50.001298   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:50.001362   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:50.038729   78080 cri.go:89] found id: ""
	I0729 18:29:50.038755   78080 logs.go:276] 0 containers: []
	W0729 18:29:50.038766   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:50.038776   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:50.038789   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:50.082540   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:50.082567   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:50.132372   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:50.132413   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:50.146806   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:50.146835   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:50.214495   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:50.214515   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:50.214532   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:48.435540   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:50.935475   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:50.240626   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:52.739158   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:50.044073   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:52.047924   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:54.542657   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:52.793987   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:52.808085   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:52.808149   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:52.844869   78080 cri.go:89] found id: ""
	I0729 18:29:52.844904   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.844917   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:52.844925   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:52.844986   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:52.878097   78080 cri.go:89] found id: ""
	I0729 18:29:52.878122   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.878135   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:52.878142   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:52.878191   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:52.910843   78080 cri.go:89] found id: ""
	I0729 18:29:52.910884   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.910894   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:52.910902   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:52.910953   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:52.943233   78080 cri.go:89] found id: ""
	I0729 18:29:52.943257   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.943267   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:52.943274   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:52.943335   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:52.978354   78080 cri.go:89] found id: ""
	I0729 18:29:52.978402   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.978413   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:52.978423   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:52.978503   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:53.011238   78080 cri.go:89] found id: ""
	I0729 18:29:53.011266   78080 logs.go:276] 0 containers: []
	W0729 18:29:53.011276   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:53.011283   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:53.011336   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:53.048787   78080 cri.go:89] found id: ""
	I0729 18:29:53.048817   78080 logs.go:276] 0 containers: []
	W0729 18:29:53.048827   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:53.048834   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:53.048900   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:53.086108   78080 cri.go:89] found id: ""
	I0729 18:29:53.086135   78080 logs.go:276] 0 containers: []
	W0729 18:29:53.086156   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:53.086176   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:53.086195   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:53.137552   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:53.137580   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:53.151308   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:53.151333   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:53.225968   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:53.225992   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:53.226004   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:53.308111   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:53.308145   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:55.850207   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:55.864003   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:55.864054   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:55.898109   78080 cri.go:89] found id: ""
	I0729 18:29:55.898134   78080 logs.go:276] 0 containers: []
	W0729 18:29:55.898142   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:55.898148   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:55.898201   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:55.931616   78080 cri.go:89] found id: ""
	I0729 18:29:55.931643   78080 logs.go:276] 0 containers: []
	W0729 18:29:55.931653   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:55.931660   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:55.931719   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:55.969034   78080 cri.go:89] found id: ""
	I0729 18:29:55.969063   78080 logs.go:276] 0 containers: []
	W0729 18:29:55.969073   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:55.969080   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:55.969142   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:56.007552   78080 cri.go:89] found id: ""
	I0729 18:29:56.007576   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.007586   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:56.007592   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:56.007653   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:56.044342   78080 cri.go:89] found id: ""
	I0729 18:29:56.044367   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.044376   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:56.044382   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:56.044437   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:56.078352   78080 cri.go:89] found id: ""
	I0729 18:29:56.078396   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.078412   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:56.078420   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:56.078471   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:56.116505   78080 cri.go:89] found id: ""
	I0729 18:29:56.116532   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.116543   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:56.116551   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:56.116611   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:56.151493   78080 cri.go:89] found id: ""
	I0729 18:29:56.151516   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.151523   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:56.151530   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:56.151542   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:56.206170   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:56.206198   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:56.219658   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:56.219684   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:56.290279   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:56.290300   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:56.290312   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:56.371352   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:56.371382   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:53.434046   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:55.435343   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:55.239055   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:57.241032   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:59.740003   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:57.041745   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:59.042416   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:58.908793   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:58.922566   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:58.922626   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:58.959375   78080 cri.go:89] found id: ""
	I0729 18:29:58.959397   78080 logs.go:276] 0 containers: []
	W0729 18:29:58.959404   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:58.959410   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:58.959459   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:58.993235   78080 cri.go:89] found id: ""
	I0729 18:29:58.993257   78080 logs.go:276] 0 containers: []
	W0729 18:29:58.993265   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:58.993271   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:58.993331   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:59.028186   78080 cri.go:89] found id: ""
	I0729 18:29:59.028212   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.028220   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:59.028225   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:59.028271   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:59.063589   78080 cri.go:89] found id: ""
	I0729 18:29:59.063619   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.063628   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:59.063635   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:59.063695   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:59.101116   78080 cri.go:89] found id: ""
	I0729 18:29:59.101142   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.101152   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:59.101158   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:59.101208   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:59.135288   78080 cri.go:89] found id: ""
	I0729 18:29:59.135314   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.135324   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:59.135332   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:59.135395   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:59.170520   78080 cri.go:89] found id: ""
	I0729 18:29:59.170549   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.170557   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:59.170562   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:59.170618   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:59.229796   78080 cri.go:89] found id: ""
	I0729 18:29:59.229825   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.229835   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:59.229843   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:59.229871   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:59.244654   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:59.244682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:59.321262   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:59.321286   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:59.321301   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:59.401423   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:59.401459   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:59.442916   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:59.442938   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:01.995116   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:02.008454   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:02.008516   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:02.046412   78080 cri.go:89] found id: ""
	I0729 18:30:02.046431   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.046438   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:02.046443   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:02.046487   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:02.082444   78080 cri.go:89] found id: ""
	I0729 18:30:02.082466   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.082476   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:02.082482   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:02.082551   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:02.116013   78080 cri.go:89] found id: ""
	I0729 18:30:02.116041   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.116052   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:02.116058   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:02.116127   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:02.155817   78080 cri.go:89] found id: ""
	I0729 18:30:02.155844   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.155854   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:02.155862   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:02.155914   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:02.195518   78080 cri.go:89] found id: ""
	I0729 18:30:02.195548   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.195556   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:02.195563   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:02.195624   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:57.934058   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:59.934547   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:01.935238   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:01.742050   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:04.239758   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:01.043550   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:03.542544   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:02.228248   78080 cri.go:89] found id: ""
	I0729 18:30:02.228274   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.228283   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:02.228289   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:02.228370   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:02.262441   78080 cri.go:89] found id: ""
	I0729 18:30:02.262469   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.262479   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:02.262486   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:02.262546   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:02.296900   78080 cri.go:89] found id: ""
	I0729 18:30:02.296930   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.296937   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:02.296953   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:02.296965   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:02.352356   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:02.352389   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:02.366336   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:02.366365   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:02.441367   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:02.441389   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:02.441403   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:02.524134   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:02.524173   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:05.071581   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:05.085481   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:05.085535   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:05.121610   78080 cri.go:89] found id: ""
	I0729 18:30:05.121636   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.121644   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:05.121652   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:05.121716   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:05.157382   78080 cri.go:89] found id: ""
	I0729 18:30:05.157406   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.157413   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:05.157418   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:05.157478   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:05.195552   78080 cri.go:89] found id: ""
	I0729 18:30:05.195582   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.195593   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:05.195600   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:05.195657   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:05.231071   78080 cri.go:89] found id: ""
	I0729 18:30:05.231095   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.231103   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:05.231108   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:05.231165   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:05.267445   78080 cri.go:89] found id: ""
	I0729 18:30:05.267474   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.267485   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:05.267493   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:05.267555   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:05.304258   78080 cri.go:89] found id: ""
	I0729 18:30:05.304279   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.304286   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:05.304291   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:05.304338   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:05.339155   78080 cri.go:89] found id: ""
	I0729 18:30:05.339176   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.339184   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:05.339190   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:05.339243   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:05.375291   78080 cri.go:89] found id: ""
	I0729 18:30:05.375328   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.375337   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:05.375346   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:05.375361   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:05.446196   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:05.446221   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:05.446236   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:05.529421   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:05.529457   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:05.570234   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:05.570269   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:05.629349   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:05.629391   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:04.434625   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:06.934246   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:06.239886   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:08.242421   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:05.543394   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:08.042242   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:08.151320   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:08.165983   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:08.166045   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:08.205703   78080 cri.go:89] found id: ""
	I0729 18:30:08.205726   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.205733   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:08.205738   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:08.205786   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:08.245919   78080 cri.go:89] found id: ""
	I0729 18:30:08.245946   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.245957   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:08.245964   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:08.246024   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:08.286595   78080 cri.go:89] found id: ""
	I0729 18:30:08.286621   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.286631   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:08.286638   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:08.286700   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:08.330032   78080 cri.go:89] found id: ""
	I0729 18:30:08.330060   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.330070   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:08.330077   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:08.330140   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:08.362535   78080 cri.go:89] found id: ""
	I0729 18:30:08.362567   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.362578   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:08.362586   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:08.362645   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:08.397648   78080 cri.go:89] found id: ""
	I0729 18:30:08.397678   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.397688   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:08.397704   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:08.397766   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:08.433615   78080 cri.go:89] found id: ""
	I0729 18:30:08.433693   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.433716   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:08.433734   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:08.433809   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:08.465765   78080 cri.go:89] found id: ""
	I0729 18:30:08.465792   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.465803   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:08.465814   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:08.465829   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:08.536332   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:08.536360   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:08.536375   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:08.613737   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:08.613776   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:08.659707   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:08.659736   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:08.712702   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:08.712736   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:11.226660   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:11.240852   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:11.240919   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:11.277632   78080 cri.go:89] found id: ""
	I0729 18:30:11.277664   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.277675   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:11.277682   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:11.277751   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:11.312458   78080 cri.go:89] found id: ""
	I0729 18:30:11.312478   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.312485   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:11.312491   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:11.312551   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:11.350375   78080 cri.go:89] found id: ""
	I0729 18:30:11.350406   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.350416   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:11.350424   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:11.350486   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:11.389280   78080 cri.go:89] found id: ""
	I0729 18:30:11.389307   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.389317   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:11.389324   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:11.389382   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:11.424907   78080 cri.go:89] found id: ""
	I0729 18:30:11.424936   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.424944   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:11.424949   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:11.425009   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:11.480686   78080 cri.go:89] found id: ""
	I0729 18:30:11.480713   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.480720   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:11.480726   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:11.480778   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:11.514831   78080 cri.go:89] found id: ""
	I0729 18:30:11.514857   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.514864   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:11.514870   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:11.514917   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:11.547930   78080 cri.go:89] found id: ""
	I0729 18:30:11.547955   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.547964   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:11.547974   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:11.547989   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:11.586068   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:11.586098   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:11.646857   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:11.646892   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:11.663549   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:11.663576   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:11.731362   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:11.731383   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:11.731397   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:08.934638   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:11.434765   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:10.738608   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:12.740637   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:10.042514   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:12.042731   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:14.042952   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:14.315531   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:14.330485   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:14.330544   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:14.363403   78080 cri.go:89] found id: ""
	I0729 18:30:14.363433   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.363444   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:14.363451   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:14.363516   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:14.401204   78080 cri.go:89] found id: ""
	I0729 18:30:14.401227   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.401234   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:14.401240   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:14.401301   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:14.436737   78080 cri.go:89] found id: ""
	I0729 18:30:14.436765   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.436775   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:14.436782   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:14.436844   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:14.471376   78080 cri.go:89] found id: ""
	I0729 18:30:14.471403   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.471411   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:14.471419   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:14.471478   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:14.506883   78080 cri.go:89] found id: ""
	I0729 18:30:14.506914   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.506925   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:14.506932   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:14.506990   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:14.546444   78080 cri.go:89] found id: ""
	I0729 18:30:14.546469   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.546479   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:14.546486   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:14.546552   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:14.580282   78080 cri.go:89] found id: ""
	I0729 18:30:14.580313   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.580320   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:14.580326   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:14.580387   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:14.614185   78080 cri.go:89] found id: ""
	I0729 18:30:14.614210   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.614220   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:14.614231   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:14.614246   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:14.652588   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:14.652610   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:14.706056   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:14.706090   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:14.719332   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:14.719356   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:14.792087   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:14.792115   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:14.792136   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:13.934967   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:16.435238   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:14.740676   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:17.239466   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:19.239656   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:16.541564   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:18.547053   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:17.375639   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:17.389473   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:17.389535   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:17.424485   78080 cri.go:89] found id: ""
	I0729 18:30:17.424513   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.424521   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:17.424527   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:17.424572   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:17.461100   78080 cri.go:89] found id: ""
	I0729 18:30:17.461129   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.461136   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:17.461141   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:17.461191   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:17.494866   78080 cri.go:89] found id: ""
	I0729 18:30:17.494894   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.494902   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:17.494907   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:17.494983   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:17.529897   78080 cri.go:89] found id: ""
	I0729 18:30:17.529924   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.529934   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:17.529940   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:17.530002   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:17.569870   78080 cri.go:89] found id: ""
	I0729 18:30:17.569897   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.569905   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:17.569910   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:17.569958   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:17.605324   78080 cri.go:89] found id: ""
	I0729 18:30:17.605364   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.605384   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:17.605392   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:17.605457   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:17.640552   78080 cri.go:89] found id: ""
	I0729 18:30:17.640583   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.640595   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:17.640602   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:17.640668   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:17.679769   78080 cri.go:89] found id: ""
	I0729 18:30:17.679800   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.679808   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:17.679827   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:17.679843   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:17.757782   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:17.757814   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:17.803850   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:17.803878   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:17.857987   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:17.858017   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:17.871062   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:17.871086   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:17.940456   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:20.441171   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:20.454752   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:20.454824   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:20.490744   78080 cri.go:89] found id: ""
	I0729 18:30:20.490773   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.490783   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:20.490791   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:20.490853   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:20.524406   78080 cri.go:89] found id: ""
	I0729 18:30:20.524437   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.524448   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:20.524463   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:20.524515   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:20.559225   78080 cri.go:89] found id: ""
	I0729 18:30:20.559257   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.559268   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:20.559275   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:20.559337   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:20.595297   78080 cri.go:89] found id: ""
	I0729 18:30:20.595324   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.595355   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:20.595364   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:20.595436   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:20.632176   78080 cri.go:89] found id: ""
	I0729 18:30:20.632204   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.632215   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:20.632222   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:20.632282   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:20.676600   78080 cri.go:89] found id: ""
	I0729 18:30:20.676625   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.676632   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:20.676638   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:20.676734   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:20.717920   78080 cri.go:89] found id: ""
	I0729 18:30:20.717945   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.717955   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:20.717966   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:20.718021   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:20.756217   78080 cri.go:89] found id: ""
	I0729 18:30:20.756243   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.756253   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:20.756262   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:20.756277   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:20.837150   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:20.837189   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:20.876023   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:20.876050   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:20.932402   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:20.932429   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:20.947422   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:20.947454   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:21.022698   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:18.934790   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:21.434992   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:21.242999   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:23.739073   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:21.042689   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:23.042794   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:23.523141   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:23.538019   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:23.538098   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:23.576953   78080 cri.go:89] found id: ""
	I0729 18:30:23.576979   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.576991   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:23.576998   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:23.577060   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:23.613052   78080 cri.go:89] found id: ""
	I0729 18:30:23.613083   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.613094   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:23.613100   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:23.613170   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:23.648694   78080 cri.go:89] found id: ""
	I0729 18:30:23.648717   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.648725   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:23.648730   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:23.648775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:23.680939   78080 cri.go:89] found id: ""
	I0729 18:30:23.680965   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.680972   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:23.680977   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:23.681032   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:23.716529   78080 cri.go:89] found id: ""
	I0729 18:30:23.716556   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.716564   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:23.716569   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:23.716628   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:23.756833   78080 cri.go:89] found id: ""
	I0729 18:30:23.756860   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.756868   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:23.756873   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:23.756918   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:23.796436   78080 cri.go:89] found id: ""
	I0729 18:30:23.796460   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.796467   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:23.796472   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:23.796519   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:23.839877   78080 cri.go:89] found id: ""
	I0729 18:30:23.839906   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.839914   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:23.839922   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:23.839934   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:23.879423   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:23.879447   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:23.928379   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:23.928408   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:23.942639   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:23.942669   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:24.014068   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:24.014095   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:24.014110   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:26.597923   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:26.610877   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:26.610945   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:26.647550   78080 cri.go:89] found id: ""
	I0729 18:30:26.647579   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.647590   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:26.647598   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:26.647655   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:26.681552   78080 cri.go:89] found id: ""
	I0729 18:30:26.681581   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.681589   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:26.681595   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:26.681660   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:26.714475   78080 cri.go:89] found id: ""
	I0729 18:30:26.714503   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.714513   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:26.714519   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:26.714588   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:26.748671   78080 cri.go:89] found id: ""
	I0729 18:30:26.748697   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.748707   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:26.748714   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:26.748775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:26.781380   78080 cri.go:89] found id: ""
	I0729 18:30:26.781406   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.781421   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:26.781429   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:26.781483   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:26.815201   78080 cri.go:89] found id: ""
	I0729 18:30:26.815230   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.815243   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:26.815251   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:26.815318   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:26.848600   78080 cri.go:89] found id: ""
	I0729 18:30:26.848628   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.848637   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:26.848644   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:26.848724   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:26.883828   78080 cri.go:89] found id: ""
	I0729 18:30:26.883872   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.883883   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:26.883893   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:26.883908   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:26.936955   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:26.936987   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:26.952212   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:26.952238   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:27.019389   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:27.019413   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:27.019426   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:27.095654   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:27.095682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:23.935397   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:26.435231   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:26.238749   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:28.239699   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:25.044320   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:27.542022   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:29.542274   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:29.637269   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:29.652138   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:29.652211   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:29.691063   78080 cri.go:89] found id: ""
	I0729 18:30:29.691094   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.691104   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:29.691111   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:29.691173   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:29.725188   78080 cri.go:89] found id: ""
	I0729 18:30:29.725224   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.725232   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:29.725240   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:29.725308   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:29.764118   78080 cri.go:89] found id: ""
	I0729 18:30:29.764149   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.764159   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:29.764167   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:29.764232   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:29.797884   78080 cri.go:89] found id: ""
	I0729 18:30:29.797909   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.797919   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:29.797927   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:29.797989   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:29.838784   78080 cri.go:89] found id: ""
	I0729 18:30:29.838808   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.838815   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:29.838821   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:29.838885   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:29.872394   78080 cri.go:89] found id: ""
	I0729 18:30:29.872420   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.872427   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:29.872433   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:29.872491   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:29.908966   78080 cri.go:89] found id: ""
	I0729 18:30:29.908995   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.909012   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:29.909020   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:29.909081   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:29.946322   78080 cri.go:89] found id: ""
	I0729 18:30:29.946344   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.946352   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:29.946371   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:29.946386   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:30.019133   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:30.019166   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:30.019179   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:30.096499   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:30.096532   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:30.136487   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:30.136519   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:30.187341   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:30.187374   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:28.435472   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:30.934817   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:30.739101   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:32.742029   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:32.042850   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:34.042919   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:32.703546   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:32.716981   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:32.717042   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:32.753275   78080 cri.go:89] found id: ""
	I0729 18:30:32.753307   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.753318   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:32.753326   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:32.753393   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:32.789075   78080 cri.go:89] found id: ""
	I0729 18:30:32.789105   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.789116   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:32.789123   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:32.789185   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:32.822945   78080 cri.go:89] found id: ""
	I0729 18:30:32.822971   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.822979   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:32.822984   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:32.823033   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:32.856523   78080 cri.go:89] found id: ""
	I0729 18:30:32.856577   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.856589   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:32.856597   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:32.856661   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:32.895768   78080 cri.go:89] found id: ""
	I0729 18:30:32.895798   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.895810   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:32.895817   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:32.895876   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:32.934990   78080 cri.go:89] found id: ""
	I0729 18:30:32.935030   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.935042   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:32.935054   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:32.935132   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:32.970924   78080 cri.go:89] found id: ""
	I0729 18:30:32.970949   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.970957   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:32.970964   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:32.971022   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:33.004133   78080 cri.go:89] found id: ""
	I0729 18:30:33.004164   78080 logs.go:276] 0 containers: []
	W0729 18:30:33.004173   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:33.004182   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:33.004202   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:33.043432   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:33.043467   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:33.095517   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:33.095554   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:33.108859   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:33.108889   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:33.180661   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:33.180681   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:33.180696   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:35.763324   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:35.777060   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:35.777138   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:35.812601   78080 cri.go:89] found id: ""
	I0729 18:30:35.812636   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.812647   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:35.812654   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:35.812719   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:35.848116   78080 cri.go:89] found id: ""
	I0729 18:30:35.848161   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.848172   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:35.848179   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:35.848240   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:35.895786   78080 cri.go:89] found id: ""
	I0729 18:30:35.895817   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.895829   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:35.895837   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:35.895911   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:35.936753   78080 cri.go:89] found id: ""
	I0729 18:30:35.936780   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.936787   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:35.936794   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:35.936848   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:35.971321   78080 cri.go:89] found id: ""
	I0729 18:30:35.971349   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.971358   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:35.971371   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:35.971434   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:36.018702   78080 cri.go:89] found id: ""
	I0729 18:30:36.018725   78080 logs.go:276] 0 containers: []
	W0729 18:30:36.018732   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:36.018737   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:36.018792   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:36.054829   78080 cri.go:89] found id: ""
	I0729 18:30:36.054865   78080 logs.go:276] 0 containers: []
	W0729 18:30:36.054875   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:36.054882   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:36.054948   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:36.087456   78080 cri.go:89] found id: ""
	I0729 18:30:36.087483   78080 logs.go:276] 0 containers: []
	W0729 18:30:36.087492   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:36.087500   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:36.087512   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:36.140919   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:36.140951   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:36.155581   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:36.155614   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:36.227617   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:36.227642   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:36.227669   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:36.304610   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:36.304651   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:32.935270   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:34.935362   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:35.239258   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:37.242161   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:39.739031   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:36.043489   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:38.542041   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:38.843099   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:38.857571   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:38.857626   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:38.890760   78080 cri.go:89] found id: ""
	I0729 18:30:38.890790   78080 logs.go:276] 0 containers: []
	W0729 18:30:38.890801   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:38.890809   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:38.890884   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:38.932701   78080 cri.go:89] found id: ""
	I0729 18:30:38.932738   78080 logs.go:276] 0 containers: []
	W0729 18:30:38.932748   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:38.932755   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:38.932812   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:38.967379   78080 cri.go:89] found id: ""
	I0729 18:30:38.967406   78080 logs.go:276] 0 containers: []
	W0729 18:30:38.967416   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:38.967430   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:38.967490   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:39.000419   78080 cri.go:89] found id: ""
	I0729 18:30:39.000450   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.000459   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:39.000466   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:39.000528   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:39.033764   78080 cri.go:89] found id: ""
	I0729 18:30:39.033793   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.033802   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:39.033807   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:39.033857   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:39.070904   78080 cri.go:89] found id: ""
	I0729 18:30:39.070933   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.070944   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:39.070951   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:39.071010   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:39.107444   78080 cri.go:89] found id: ""
	I0729 18:30:39.107471   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.107480   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:39.107488   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:39.107549   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:39.141392   78080 cri.go:89] found id: ""
	I0729 18:30:39.141423   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.141436   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:39.141449   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:39.141464   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:39.154874   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:39.154905   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:39.229370   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:39.229396   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:39.229413   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:39.310508   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:39.310538   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:39.352547   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:39.352569   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:41.908463   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:41.922132   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:41.922209   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:41.960404   78080 cri.go:89] found id: ""
	I0729 18:30:41.960431   78080 logs.go:276] 0 containers: []
	W0729 18:30:41.960439   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:41.960444   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:41.960498   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:41.994082   78080 cri.go:89] found id: ""
	I0729 18:30:41.994110   78080 logs.go:276] 0 containers: []
	W0729 18:30:41.994117   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:41.994123   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:41.994177   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:42.030301   78080 cri.go:89] found id: ""
	I0729 18:30:42.030322   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.030330   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:42.030336   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:42.030401   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:42.064310   78080 cri.go:89] found id: ""
	I0729 18:30:42.064339   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.064349   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:42.064356   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:42.064413   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:42.097705   78080 cri.go:89] found id: ""
	I0729 18:30:42.097738   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.097748   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:42.097761   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:42.097819   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:42.133254   78080 cri.go:89] found id: ""
	I0729 18:30:42.133282   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.133292   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:42.133299   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:42.133361   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:42.170028   78080 cri.go:89] found id: ""
	I0729 18:30:42.170054   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.170063   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:42.170075   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:42.170141   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:42.205680   78080 cri.go:89] found id: ""
	I0729 18:30:42.205712   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.205723   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:42.205736   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:42.205749   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:37.442211   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:39.934866   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:41.935293   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:42.240035   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:41.041897   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:43.042300   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:42.246322   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:42.246350   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:42.300852   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:42.300884   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:42.316306   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:42.316333   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:42.389898   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:42.389920   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:42.389934   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:44.971238   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:44.984796   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:44.984846   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:45.021842   78080 cri.go:89] found id: ""
	I0729 18:30:45.021868   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.021877   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:45.021885   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:45.021958   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:45.059353   78080 cri.go:89] found id: ""
	I0729 18:30:45.059377   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.059387   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:45.059394   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:45.059456   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:45.094867   78080 cri.go:89] found id: ""
	I0729 18:30:45.094900   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.094911   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:45.094918   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:45.094974   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:45.128589   78080 cri.go:89] found id: ""
	I0729 18:30:45.128614   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.128622   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:45.128628   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:45.128671   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:45.160137   78080 cri.go:89] found id: ""
	I0729 18:30:45.160165   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.160172   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:45.160177   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:45.160228   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:45.205757   78080 cri.go:89] found id: ""
	I0729 18:30:45.205780   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.205787   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:45.205793   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:45.205840   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:45.250056   78080 cri.go:89] found id: ""
	I0729 18:30:45.250084   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.250091   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:45.250096   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:45.250179   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:45.285349   78080 cri.go:89] found id: ""
	I0729 18:30:45.285372   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.285380   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:45.285389   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:45.285401   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:45.364188   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:45.364218   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:45.412638   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:45.412660   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:45.467713   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:45.467745   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:45.483811   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:45.483835   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:45.564866   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:44.434921   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:46.934237   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:44.740648   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:47.239253   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:49.240229   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:45.043415   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:47.542757   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:49.543251   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:48.065579   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:48.079441   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:48.079511   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:48.115540   78080 cri.go:89] found id: ""
	I0729 18:30:48.115569   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.115578   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:48.115586   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:48.115670   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:48.151810   78080 cri.go:89] found id: ""
	I0729 18:30:48.151834   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.151841   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:48.151847   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:48.151913   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:48.187459   78080 cri.go:89] found id: ""
	I0729 18:30:48.187490   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.187500   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:48.187508   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:48.187568   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:48.226804   78080 cri.go:89] found id: ""
	I0729 18:30:48.226835   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.226846   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:48.226853   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:48.226916   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:48.260413   78080 cri.go:89] found id: ""
	I0729 18:30:48.260439   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.260448   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:48.260455   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:48.260517   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:48.296719   78080 cri.go:89] found id: ""
	I0729 18:30:48.296743   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.296751   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:48.296756   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:48.296806   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:48.331969   78080 cri.go:89] found id: ""
	I0729 18:30:48.331995   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.332002   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:48.332008   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:48.332055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:48.370593   78080 cri.go:89] found id: ""
	I0729 18:30:48.370618   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.370626   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:48.370634   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:48.370645   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:48.410653   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:48.410679   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:48.465467   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:48.465503   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:48.480025   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:48.480053   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:48.557806   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:48.557824   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:48.557840   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:51.140743   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:51.153970   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:51.154046   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:51.187826   78080 cri.go:89] found id: ""
	I0729 18:30:51.187851   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.187862   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:51.187868   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:51.187922   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:51.226140   78080 cri.go:89] found id: ""
	I0729 18:30:51.226172   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.226182   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:51.226189   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:51.226255   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:51.262321   78080 cri.go:89] found id: ""
	I0729 18:30:51.262349   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.262357   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:51.262378   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:51.262440   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:51.295356   78080 cri.go:89] found id: ""
	I0729 18:30:51.295383   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.295395   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:51.295403   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:51.295467   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:51.328320   78080 cri.go:89] found id: ""
	I0729 18:30:51.328349   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.328361   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:51.328367   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:51.328424   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:51.364202   78080 cri.go:89] found id: ""
	I0729 18:30:51.364233   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.364242   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:51.364249   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:51.364313   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:51.405500   78080 cri.go:89] found id: ""
	I0729 18:30:51.405529   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.405538   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:51.405544   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:51.405606   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:51.443519   78080 cri.go:89] found id: ""
	I0729 18:30:51.443541   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.443548   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:51.443556   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:51.443567   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:51.495560   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:51.495599   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:51.512152   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:51.512178   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:51.590972   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:51.590992   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:51.591021   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:51.688717   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:51.688757   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:48.934577   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:51.437173   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:51.739680   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:54.238626   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:52.044254   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:54.545288   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:54.256011   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:54.270602   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:54.270653   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:54.311547   78080 cri.go:89] found id: ""
	I0729 18:30:54.311574   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.311584   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:54.311592   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:54.311655   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:54.347559   78080 cri.go:89] found id: ""
	I0729 18:30:54.347591   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.347602   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:54.347610   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:54.347675   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:54.382180   78080 cri.go:89] found id: ""
	I0729 18:30:54.382205   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.382212   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:54.382217   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:54.382264   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:54.415560   78080 cri.go:89] found id: ""
	I0729 18:30:54.415587   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.415594   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:54.415600   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:54.415655   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:54.450313   78080 cri.go:89] found id: ""
	I0729 18:30:54.450341   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.450351   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:54.450372   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:54.450439   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:54.484649   78080 cri.go:89] found id: ""
	I0729 18:30:54.484678   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.484687   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:54.484694   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:54.484741   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:54.520170   78080 cri.go:89] found id: ""
	I0729 18:30:54.520204   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.520212   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:54.520220   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:54.520270   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:54.562724   78080 cri.go:89] found id: ""
	I0729 18:30:54.562753   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.562762   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:54.562772   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:54.562788   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:54.617461   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:54.617498   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:54.630970   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:54.630993   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:54.699332   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:54.699353   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:54.699366   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:54.779240   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:54.779276   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
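The cycle above is the probe that repeats throughout this log: over SSH, minikube asks crictl for any container (running or exited) whose name matches each control-plane component, and every probe comes back empty, which is what produces the repeated `0 containers: []` / `No container was found matching ...` lines. A minimal local sketch of the same check follows; it is not minikube's own code (it runs crictl directly rather than through ssh_runner) and assumes crictl is installed on the node being inspected.

    // probe.go — sketch of the per-component crictl probe seen in the log above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Component names taken from the log lines above.
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range components {
    		// Same flags as in the log: list all containers, print only IDs,
    		// filter by name.
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("probe %q failed: %v\n", name, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
    	}
    }

An empty ID list for kube-apiserver and etcd, as seen here, means the control plane was never started by the runtime, which is why the subsequent kubectl calls against localhost:8443 fail.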
	I0729 18:30:53.934151   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:56.434549   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:56.239554   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:58.239583   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:57.041845   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:59.042164   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:57.318673   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:57.332789   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:57.332845   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:57.370434   78080 cri.go:89] found id: ""
	I0729 18:30:57.370461   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.370486   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:57.370492   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:57.370547   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:57.420694   78080 cri.go:89] found id: ""
	I0729 18:30:57.420724   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.420735   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:57.420742   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:57.420808   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:57.469245   78080 cri.go:89] found id: ""
	I0729 18:30:57.469271   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.469282   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:57.469288   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:57.469355   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:57.524937   78080 cri.go:89] found id: ""
	I0729 18:30:57.524963   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.524970   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:57.524976   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:57.525031   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:57.566803   78080 cri.go:89] found id: ""
	I0729 18:30:57.566830   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.566840   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:57.566847   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:57.566910   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:57.602786   78080 cri.go:89] found id: ""
	I0729 18:30:57.602814   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.602821   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:57.602826   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:57.602891   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:57.639319   78080 cri.go:89] found id: ""
	I0729 18:30:57.639347   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.639355   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:57.639361   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:57.639408   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:57.672580   78080 cri.go:89] found id: ""
	I0729 18:30:57.672610   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.672621   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:57.672632   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:57.672647   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:57.751550   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:57.751572   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:57.751586   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:57.840057   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:57.840097   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:57.884698   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:57.884737   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:57.944468   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:57.944497   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:00.459605   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:00.473079   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:00.473138   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:00.508492   78080 cri.go:89] found id: ""
	I0729 18:31:00.508525   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.508536   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:00.508543   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:00.508604   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:00.544844   78080 cri.go:89] found id: ""
	I0729 18:31:00.544875   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.544886   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:00.544899   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:00.544960   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:00.578402   78080 cri.go:89] found id: ""
	I0729 18:31:00.578432   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.578443   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:00.578450   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:00.578508   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:00.611886   78080 cri.go:89] found id: ""
	I0729 18:31:00.611913   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.611922   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:00.611928   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:00.611989   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:00.649126   78080 cri.go:89] found id: ""
	I0729 18:31:00.649153   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.649162   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:00.649168   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:00.649229   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:00.686534   78080 cri.go:89] found id: ""
	I0729 18:31:00.686561   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.686571   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:00.686578   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:00.686639   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:00.718656   78080 cri.go:89] found id: ""
	I0729 18:31:00.718680   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.718690   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:00.718696   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:00.718755   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:00.752740   78080 cri.go:89] found id: ""
	I0729 18:31:00.752766   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.752776   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:00.752786   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:00.752800   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:00.804293   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:00.804323   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:00.817988   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:00.818010   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:00.892178   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:00.892210   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:00.892231   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:00.973164   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:00.973199   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:58.434888   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:00.934518   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:00.239908   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:02.240038   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:04.240420   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:01.542080   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:03.542877   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:04.036213   77627 pod_ready.go:81] duration metric: took 4m0.000109353s for pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace to be "Ready" ...
	E0729 18:31:04.036235   77627 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 18:31:04.036250   77627 pod_ready.go:38] duration metric: took 4m10.564329435s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:31:04.036294   77627 kubeadm.go:597] duration metric: took 4m18.357564209s to restartPrimaryControlPlane
	W0729 18:31:04.036359   77627 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 18:31:04.036388   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
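At this point the 4m bound on the metrics-server readiness wait has expired, so minikube gives up on restarting the existing control plane and falls back to `kubeadm reset` before rebuilding the cluster. The pod_ready.go lines are a deadline-bounded poll; the sketch below shows that pattern in isolation. It is not minikube's code, and the check function is a hypothetical stand-in for the real readiness test, which reads the pod's Ready condition through the Kubernetes API.

    // wait_ready.go — sketch of a bounded readiness wait, the pattern behind
    // "timed out waiting 4m0s for pod ... to be Ready (will not retry!)".
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // waitReady polls check every interval until it returns true or timeout elapses.
    func waitReady(check func() (bool, error), interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		ok, err := check()
    		if err != nil {
    			return err
    		}
    		if ok {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return errors.New("timed out waiting for condition")
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	// Hypothetical check that never succeeds, to exercise the timeout path.
    	start := time.Now()
    	err := waitReady(func() (bool, error) { return false, nil }, 2*time.Second, 10*time.Second)
    	fmt.Printf("waited %s: %v\n", time.Since(start).Round(time.Second), err)
    }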
	I0729 18:31:03.512105   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:03.526536   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:03.526602   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:03.561579   78080 cri.go:89] found id: ""
	I0729 18:31:03.561604   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.561614   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:03.561621   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:03.561681   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:03.603995   78080 cri.go:89] found id: ""
	I0729 18:31:03.604019   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.604028   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:03.604033   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:03.604079   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:03.640879   78080 cri.go:89] found id: ""
	I0729 18:31:03.640902   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.640910   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:03.640917   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:03.640971   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:03.675262   78080 cri.go:89] found id: ""
	I0729 18:31:03.675288   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.675296   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:03.675302   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:03.675349   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:03.708094   78080 cri.go:89] found id: ""
	I0729 18:31:03.708128   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.708137   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:03.708142   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:03.708190   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:03.748262   78080 cri.go:89] found id: ""
	I0729 18:31:03.748287   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.748298   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:03.748304   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:03.748360   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:03.789758   78080 cri.go:89] found id: ""
	I0729 18:31:03.789788   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.789800   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:03.789806   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:03.789893   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:03.829253   78080 cri.go:89] found id: ""
	I0729 18:31:03.829280   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.829291   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:03.829299   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:03.829317   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:03.883012   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:03.883044   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:03.899264   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:03.899294   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:03.970241   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:03.970261   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:03.970274   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:04.056205   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:04.056244   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
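The recurring "connection to the server localhost:8443 was refused" from the describe-nodes step is consistent with the empty crictl probes: nothing is listening on the apiserver port. A plain TCP dial confirms the same thing without going through kubectl; this is a sketch for illustration, not part of the test, and 8443 is simply the port kubectl is pointed at in this log.

    // apiserver_probe.go — sketch: check whether anything listens on the
    // apiserver port that kubectl keeps failing to reach.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }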
	I0729 18:31:06.604919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:06.619163   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:06.619242   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:06.656939   78080 cri.go:89] found id: ""
	I0729 18:31:06.656970   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.656982   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:06.656989   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:06.657075   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:06.692577   78080 cri.go:89] found id: ""
	I0729 18:31:06.692608   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.692624   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:06.692632   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:06.692695   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:06.730045   78080 cri.go:89] found id: ""
	I0729 18:31:06.730077   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.730088   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:06.730096   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:06.730179   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:06.771794   78080 cri.go:89] found id: ""
	I0729 18:31:06.771820   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.771830   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:06.771838   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:06.771905   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:06.806149   78080 cri.go:89] found id: ""
	I0729 18:31:06.806177   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.806187   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:06.806194   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:06.806252   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:06.851875   78080 cri.go:89] found id: ""
	I0729 18:31:06.851905   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.851923   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:06.851931   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:06.851996   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:06.890335   78080 cri.go:89] found id: ""
	I0729 18:31:06.890382   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.890393   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:06.890399   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:06.890460   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:06.928873   78080 cri.go:89] found id: ""
	I0729 18:31:06.928902   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.928912   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:06.928922   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:06.928935   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:06.944269   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:06.944295   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:07.011658   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:07.011682   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:07.011697   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:07.109899   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:07.109948   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:07.154569   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:07.154600   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:02.935054   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:05.434752   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:06.242994   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:08.738448   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:09.709101   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:09.722387   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:09.722461   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:09.760443   78080 cri.go:89] found id: ""
	I0729 18:31:09.760471   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.760481   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:09.760488   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:09.760551   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:09.796177   78080 cri.go:89] found id: ""
	I0729 18:31:09.796200   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.796209   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:09.796214   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:09.796264   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:09.831955   78080 cri.go:89] found id: ""
	I0729 18:31:09.831983   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.831990   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:09.831995   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:09.832055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:09.863913   78080 cri.go:89] found id: ""
	I0729 18:31:09.863939   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.863949   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:09.863956   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:09.864014   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:09.897553   78080 cri.go:89] found id: ""
	I0729 18:31:09.897575   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.897583   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:09.897588   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:09.897645   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:09.935203   78080 cri.go:89] found id: ""
	I0729 18:31:09.935221   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.935228   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:09.935238   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:09.935296   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:09.971098   78080 cri.go:89] found id: ""
	I0729 18:31:09.971125   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.971135   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:09.971142   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:09.971224   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:10.006760   78080 cri.go:89] found id: ""
	I0729 18:31:10.006794   78080 logs.go:276] 0 containers: []
	W0729 18:31:10.006804   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:10.006815   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:10.006830   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:10.056037   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:10.056066   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:10.070633   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:10.070660   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:10.139953   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:10.139983   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:10.140002   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:10.220748   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:10.220781   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:07.436020   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:09.934218   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:11.934977   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:10.740109   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:13.239440   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:12.766391   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:12.779837   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:12.779889   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:12.813910   78080 cri.go:89] found id: ""
	I0729 18:31:12.813941   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.813951   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:12.813959   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:12.814008   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:12.848811   78080 cri.go:89] found id: ""
	I0729 18:31:12.848854   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.848865   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:12.848872   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:12.848927   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:12.884740   78080 cri.go:89] found id: ""
	I0729 18:31:12.884769   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.884780   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:12.884786   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:12.884833   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:12.923826   78080 cri.go:89] found id: ""
	I0729 18:31:12.923859   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.923870   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:12.923878   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:12.923930   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:12.959127   78080 cri.go:89] found id: ""
	I0729 18:31:12.959157   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.959168   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:12.959175   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:12.959245   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:12.994384   78080 cri.go:89] found id: ""
	I0729 18:31:12.994417   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.994430   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:12.994439   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:12.994506   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:13.027854   78080 cri.go:89] found id: ""
	I0729 18:31:13.027883   78080 logs.go:276] 0 containers: []
	W0729 18:31:13.027892   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:13.027897   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:13.027951   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:13.062270   78080 cri.go:89] found id: ""
	I0729 18:31:13.062300   78080 logs.go:276] 0 containers: []
	W0729 18:31:13.062310   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:13.062321   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:13.062334   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:13.114473   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:13.114500   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:13.127820   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:13.127845   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:13.195830   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:13.195848   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:13.195862   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:13.281711   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:13.281748   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:15.824456   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:15.837532   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:15.837587   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:15.871706   78080 cri.go:89] found id: ""
	I0729 18:31:15.871739   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.871750   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:15.871757   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:15.871817   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:15.906882   78080 cri.go:89] found id: ""
	I0729 18:31:15.906905   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.906912   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:15.906917   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:15.906976   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:15.943015   78080 cri.go:89] found id: ""
	I0729 18:31:15.943043   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.943057   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:15.943065   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:15.943126   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:15.980501   78080 cri.go:89] found id: ""
	I0729 18:31:15.980528   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.980536   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:15.980542   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:15.980588   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:16.014148   78080 cri.go:89] found id: ""
	I0729 18:31:16.014176   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.014183   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:16.014189   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:16.014236   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:16.048296   78080 cri.go:89] found id: ""
	I0729 18:31:16.048319   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.048326   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:16.048334   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:16.048392   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:16.084328   78080 cri.go:89] found id: ""
	I0729 18:31:16.084350   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.084358   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:16.084363   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:16.084411   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:16.120048   78080 cri.go:89] found id: ""
	I0729 18:31:16.120076   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.120084   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:16.120092   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:16.120105   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:16.173476   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:16.173503   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:16.190200   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:16.190232   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:16.261993   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:16.262014   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:16.262026   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:16.340298   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:16.340331   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:14.434706   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:16.936150   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:15.739493   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:18.239834   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:18.883152   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:18.897292   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:18.897360   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:18.931276   78080 cri.go:89] found id: ""
	I0729 18:31:18.931303   78080 logs.go:276] 0 containers: []
	W0729 18:31:18.931313   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:18.931321   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:18.931379   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:18.975803   78080 cri.go:89] found id: ""
	I0729 18:31:18.975832   78080 logs.go:276] 0 containers: []
	W0729 18:31:18.975843   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:18.975853   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:18.975912   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:19.012920   78080 cri.go:89] found id: ""
	I0729 18:31:19.012951   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.012963   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:19.012970   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:19.013031   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:19.047640   78080 cri.go:89] found id: ""
	I0729 18:31:19.047667   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.047679   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:19.047687   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:19.047749   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:19.082495   78080 cri.go:89] found id: ""
	I0729 18:31:19.082522   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.082533   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:19.082540   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:19.082591   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:19.117988   78080 cri.go:89] found id: ""
	I0729 18:31:19.118016   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.118027   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:19.118034   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:19.118096   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:19.153725   78080 cri.go:89] found id: ""
	I0729 18:31:19.153753   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.153764   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:19.153771   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:19.153836   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:19.192827   78080 cri.go:89] found id: ""
	I0729 18:31:19.192857   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.192868   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:19.192879   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:19.192894   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:19.208802   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:19.208833   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:19.285877   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:19.285897   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:19.285909   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:19.366563   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:19.366598   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:19.404563   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:19.404590   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:21.958449   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:21.971674   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:21.971739   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:22.006231   78080 cri.go:89] found id: ""
	I0729 18:31:22.006253   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.006261   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:22.006266   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:22.006314   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:22.042575   78080 cri.go:89] found id: ""
	I0729 18:31:22.042599   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.042609   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:22.042616   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:22.042679   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:22.079446   78080 cri.go:89] found id: ""
	I0729 18:31:22.079471   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.079482   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:22.079489   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:22.079554   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:22.115940   78080 cri.go:89] found id: ""
	I0729 18:31:22.115967   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.115976   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:22.115984   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:22.116055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:22.149420   78080 cri.go:89] found id: ""
	I0729 18:31:22.149447   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.149456   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:22.149461   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:22.149511   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:22.182992   78080 cri.go:89] found id: ""
	I0729 18:31:22.183019   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.183027   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:22.183032   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:22.183090   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:22.218441   78080 cri.go:89] found id: ""
	I0729 18:31:22.218474   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.218487   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:22.218497   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:22.218564   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:19.434020   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:21.434806   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:20.739308   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:22.741502   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:22.263135   78080 cri.go:89] found id: ""
	I0729 18:31:22.263164   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.263173   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:22.263183   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:22.263198   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:22.319010   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:22.319049   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:22.333151   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:22.333179   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:22.404661   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:22.404683   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:22.404706   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:22.488497   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:22.488537   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:25.032215   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:25.045114   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:25.045191   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:25.082244   78080 cri.go:89] found id: ""
	I0729 18:31:25.082278   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.082289   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:25.082299   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:25.082388   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:25.118295   78080 cri.go:89] found id: ""
	I0729 18:31:25.118318   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.118325   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:25.118331   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:25.118395   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:25.157948   78080 cri.go:89] found id: ""
	I0729 18:31:25.157974   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.157984   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:25.157992   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:25.158054   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:25.194708   78080 cri.go:89] found id: ""
	I0729 18:31:25.194734   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.194743   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:25.194751   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:25.194813   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:25.235923   78080 cri.go:89] found id: ""
	I0729 18:31:25.235952   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.235962   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:25.235969   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:25.236032   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:25.271316   78080 cri.go:89] found id: ""
	I0729 18:31:25.271342   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.271353   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:25.271360   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:25.271422   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:25.309399   78080 cri.go:89] found id: ""
	I0729 18:31:25.309427   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.309438   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:25.309446   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:25.309503   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:25.347979   78080 cri.go:89] found id: ""
	I0729 18:31:25.348009   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.348021   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:25.348031   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:25.348046   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:25.400785   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:25.400812   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:25.413891   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:25.413915   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:25.487721   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:25.487752   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:25.487767   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:25.575500   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:25.575531   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:23.935200   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:26.434289   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:25.240961   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:27.738838   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:27.738866   77859 pod_ready.go:81] duration metric: took 4m0.005785253s for pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace to be "Ready" ...
	E0729 18:31:27.738877   77859 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0729 18:31:27.738887   77859 pod_ready.go:38] duration metric: took 4m4.550102816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:31:27.738903   77859 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:31:27.738934   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:27.738991   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:27.798686   77859 cri.go:89] found id: "630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:27.798710   77859 cri.go:89] found id: ""
	I0729 18:31:27.798717   77859 logs.go:276] 1 containers: [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4]
	I0729 18:31:27.798774   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.804769   77859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:27.804827   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:27.849829   77859 cri.go:89] found id: "fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:27.849849   77859 cri.go:89] found id: ""
	I0729 18:31:27.849857   77859 logs.go:276] 1 containers: [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a]
	I0729 18:31:27.849909   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.854472   77859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:27.854540   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:27.891637   77859 cri.go:89] found id: "2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:27.891659   77859 cri.go:89] found id: ""
	I0729 18:31:27.891668   77859 logs.go:276] 1 containers: [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b]
	I0729 18:31:27.891715   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.896663   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:27.896713   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:27.941948   77859 cri.go:89] found id: "991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:27.941968   77859 cri.go:89] found id: ""
	I0729 18:31:27.941976   77859 logs.go:276] 1 containers: [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd]
	I0729 18:31:27.942018   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.946770   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:27.946821   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:27.988118   77859 cri.go:89] found id: "ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:27.988139   77859 cri.go:89] found id: ""
	I0729 18:31:27.988147   77859 logs.go:276] 1 containers: [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9]
	I0729 18:31:27.988193   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.992474   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:27.992535   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:28.032779   77859 cri.go:89] found id: "92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:28.032801   77859 cri.go:89] found id: ""
	I0729 18:31:28.032811   77859 logs.go:276] 1 containers: [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc]
	I0729 18:31:28.032859   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:28.037791   77859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:28.037838   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:28.081087   77859 cri.go:89] found id: ""
	I0729 18:31:28.081115   77859 logs.go:276] 0 containers: []
	W0729 18:31:28.081124   77859 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:28.081131   77859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 18:31:28.081183   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 18:31:28.123906   77859 cri.go:89] found id: "9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:28.123927   77859 cri.go:89] found id: "482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:28.123933   77859 cri.go:89] found id: ""
	I0729 18:31:28.123940   77859 logs.go:276] 2 containers: [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b]
	I0729 18:31:28.123979   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:28.128737   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:28.133127   77859 logs.go:123] Gathering logs for storage-provisioner [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481] ...
	I0729 18:31:28.133201   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:28.182950   77859 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:28.182985   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:28.241873   77859 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:28.241914   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 18:31:28.391355   77859 logs.go:123] Gathering logs for kube-apiserver [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4] ...
	I0729 18:31:28.391389   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:28.447637   77859 logs.go:123] Gathering logs for etcd [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a] ...
	I0729 18:31:28.447671   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:28.496815   77859 logs.go:123] Gathering logs for kube-scheduler [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd] ...
	I0729 18:31:28.496848   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:28.540617   77859 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:28.540651   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:29.063074   77859 logs.go:123] Gathering logs for container status ...
	I0729 18:31:29.063116   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:29.123348   77859 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:29.123378   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:29.137340   77859 logs.go:123] Gathering logs for coredns [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b] ...
	I0729 18:31:29.137365   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:29.174775   77859 logs.go:123] Gathering logs for kube-proxy [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9] ...
	I0729 18:31:29.174810   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:29.227526   77859 logs.go:123] Gathering logs for kube-controller-manager [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc] ...
	I0729 18:31:29.227560   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:29.281814   77859 logs.go:123] Gathering logs for storage-provisioner [482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b] ...
	I0729 18:31:29.281844   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:28.121761   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:28.136756   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:28.136813   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:28.175461   78080 cri.go:89] found id: ""
	I0729 18:31:28.175491   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.175502   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:28.175509   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:28.175567   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:28.215024   78080 cri.go:89] found id: ""
	I0729 18:31:28.215046   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.215055   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:28.215060   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:28.215122   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:28.253999   78080 cri.go:89] found id: ""
	I0729 18:31:28.254023   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.254031   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:28.254037   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:28.254090   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:28.287902   78080 cri.go:89] found id: ""
	I0729 18:31:28.287929   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.287940   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:28.287948   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:28.288006   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:28.322390   78080 cri.go:89] found id: ""
	I0729 18:31:28.322422   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.322433   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:28.322441   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:28.322500   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:28.356951   78080 cri.go:89] found id: ""
	I0729 18:31:28.356980   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.356991   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:28.356999   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:28.357060   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:28.393439   78080 cri.go:89] found id: ""
	I0729 18:31:28.393461   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.393471   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:28.393477   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:28.393535   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:28.431827   78080 cri.go:89] found id: ""
	I0729 18:31:28.431858   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.431868   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:28.431878   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:28.431892   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:28.509279   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:28.509315   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:28.564036   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:28.564064   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:28.626970   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:28.627000   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:28.641417   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:28.641446   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:28.713406   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:31.213942   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:31.228942   78080 kubeadm.go:597] duration metric: took 4m3.040952507s to restartPrimaryControlPlane
	W0729 18:31:31.229020   78080 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 18:31:31.229042   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:31:31.696335   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:31:31.711230   78080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:31:31.720924   78080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:31:31.730348   78080 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:31:31.730378   78080 kubeadm.go:157] found existing configuration files:
	
	I0729 18:31:31.730418   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:31:31.739761   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:31:31.739810   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:31:31.749021   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:31:31.758107   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:31:31.758155   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:31:31.768326   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:31:31.777347   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:31:31.777388   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:31:31.786752   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:31:31.795728   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:31:31.795776   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:31:31.805369   78080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:31:31.883678   78080 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 18:31:31.883751   78080 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:31:32.040989   78080 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:31:32.041127   78080 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:31:32.041259   78080 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:31:32.261525   78080 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:31:28.434784   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:30.435227   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:32.263137   78080 out.go:204]   - Generating certificates and keys ...
	I0729 18:31:32.263242   78080 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:31:32.263349   78080 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:31:32.263461   78080 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:31:32.263554   78080 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:31:32.263640   78080 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:31:32.263724   78080 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:31:32.263801   78080 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:31:32.263872   78080 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:31:32.263993   78080 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:31:32.264109   78080 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:31:32.264164   78080 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:31:32.264255   78080 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:31:32.435248   78080 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:31:32.509478   78080 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:31:32.737003   78080 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:31:33.079523   78080 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:31:33.099871   78080 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:31:33.101450   78080 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:31:33.101520   78080 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:31:33.242577   78080 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:31:31.826678   77859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:31.845448   77859 api_server.go:72] duration metric: took 4m16.365262679s to wait for apiserver process to appear ...
	I0729 18:31:31.845478   77859 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:31:31.845519   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:31.845568   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:31.889194   77859 cri.go:89] found id: "630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:31.889226   77859 cri.go:89] found id: ""
	I0729 18:31:31.889236   77859 logs.go:276] 1 containers: [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4]
	I0729 18:31:31.889290   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:31.894167   77859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:31.894271   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:31.936287   77859 cri.go:89] found id: "fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:31.936306   77859 cri.go:89] found id: ""
	I0729 18:31:31.936315   77859 logs.go:276] 1 containers: [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a]
	I0729 18:31:31.936367   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:31.941051   77859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:31.941110   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:31.978033   77859 cri.go:89] found id: "2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:31.978057   77859 cri.go:89] found id: ""
	I0729 18:31:31.978066   77859 logs.go:276] 1 containers: [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b]
	I0729 18:31:31.978115   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:31.982632   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:31.982704   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:32.023792   77859 cri.go:89] found id: "991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:32.023812   77859 cri.go:89] found id: ""
	I0729 18:31:32.023820   77859 logs.go:276] 1 containers: [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd]
	I0729 18:31:32.023875   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.028309   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:32.028367   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:32.071944   77859 cri.go:89] found id: "ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:32.071966   77859 cri.go:89] found id: ""
	I0729 18:31:32.071975   77859 logs.go:276] 1 containers: [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9]
	I0729 18:31:32.072033   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.076171   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:32.076252   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:32.111357   77859 cri.go:89] found id: "92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:32.111379   77859 cri.go:89] found id: ""
	I0729 18:31:32.111389   77859 logs.go:276] 1 containers: [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc]
	I0729 18:31:32.111446   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.115718   77859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:32.115775   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:32.168552   77859 cri.go:89] found id: ""
	I0729 18:31:32.168586   77859 logs.go:276] 0 containers: []
	W0729 18:31:32.168597   77859 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:32.168604   77859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 18:31:32.168686   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 18:31:32.210002   77859 cri.go:89] found id: "9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:32.210027   77859 cri.go:89] found id: "482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:32.210034   77859 cri.go:89] found id: ""
	I0729 18:31:32.210043   77859 logs.go:276] 2 containers: [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b]
	I0729 18:31:32.210090   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.214929   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.220097   77859 logs.go:123] Gathering logs for container status ...
	I0729 18:31:32.220121   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:32.270343   77859 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:32.270384   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:32.329269   77859 logs.go:123] Gathering logs for kube-apiserver [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4] ...
	I0729 18:31:32.329303   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:32.388361   77859 logs.go:123] Gathering logs for storage-provisioner [482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b] ...
	I0729 18:31:32.388388   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:32.430072   77859 logs.go:123] Gathering logs for coredns [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b] ...
	I0729 18:31:32.430108   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:32.471669   77859 logs.go:123] Gathering logs for kube-scheduler [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd] ...
	I0729 18:31:32.471701   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:32.508395   77859 logs.go:123] Gathering logs for kube-proxy [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9] ...
	I0729 18:31:32.508424   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:32.548968   77859 logs.go:123] Gathering logs for kube-controller-manager [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc] ...
	I0729 18:31:32.549001   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:32.605269   77859 logs.go:123] Gathering logs for storage-provisioner [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481] ...
	I0729 18:31:32.605306   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:32.642298   77859 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:32.642330   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:32.659407   77859 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:32.659431   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 18:31:32.776509   77859 logs.go:123] Gathering logs for etcd [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a] ...
	I0729 18:31:32.776544   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:32.832365   77859 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:32.832395   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:35.748109   77627 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.711694865s)
	I0729 18:31:35.748184   77627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:31:35.765137   77627 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:31:35.775945   77627 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:31:35.786206   77627 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:31:35.786232   77627 kubeadm.go:157] found existing configuration files:
	
	I0729 18:31:35.786284   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:31:35.797157   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:31:35.797218   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:31:35.810497   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:31:35.821537   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:31:35.821603   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:31:35.832985   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:31:35.842247   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:31:35.842309   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:31:35.852578   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:31:35.861798   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:31:35.861858   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:31:35.872903   77627 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:31:35.926675   77627 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 18:31:35.926872   77627 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:31:36.089002   77627 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:31:36.089179   77627 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:31:36.089310   77627 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:31:36.321844   77627 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:31:33.244436   78080 out.go:204]   - Booting up control plane ...
	I0729 18:31:33.244570   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:31:33.245677   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:31:33.249530   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:31:33.250262   78080 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:31:33.261418   78080 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 18:31:36.324255   77627 out.go:204]   - Generating certificates and keys ...
	I0729 18:31:36.324352   77627 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:31:36.324435   77627 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:31:36.324539   77627 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:31:36.324619   77627 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:31:36.324707   77627 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:31:36.324780   77627 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:31:36.324864   77627 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:31:36.324945   77627 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:31:36.325036   77627 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:31:36.325175   77627 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:31:36.325340   77627 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:31:36.325425   77627 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:31:36.815491   77627 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:31:36.870914   77627 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 18:31:36.957705   77627 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:31:37.074845   77627 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:31:37.220920   77627 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:31:37.221651   77627 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:31:37.224384   77627 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:31:32.435653   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:34.933615   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:36.935070   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:35.792366   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:31:35.801160   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 200:
	ok
	I0729 18:31:35.804043   77859 api_server.go:141] control plane version: v1.30.3
	I0729 18:31:35.804063   77859 api_server.go:131] duration metric: took 3.958578435s to wait for apiserver health ...
	I0729 18:31:35.804072   77859 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:31:35.804099   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:35.804140   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:35.845977   77859 cri.go:89] found id: "630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:35.846003   77859 cri.go:89] found id: ""
	I0729 18:31:35.846018   77859 logs.go:276] 1 containers: [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4]
	I0729 18:31:35.846072   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:35.851227   77859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:35.851302   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:35.892117   77859 cri.go:89] found id: "fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:35.892142   77859 cri.go:89] found id: ""
	I0729 18:31:35.892158   77859 logs.go:276] 1 containers: [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a]
	I0729 18:31:35.892215   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:35.897136   77859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:35.897216   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:35.941512   77859 cri.go:89] found id: "2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:35.941532   77859 cri.go:89] found id: ""
	I0729 18:31:35.941541   77859 logs.go:276] 1 containers: [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b]
	I0729 18:31:35.941598   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:35.946072   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:35.946124   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:35.984306   77859 cri.go:89] found id: "991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:35.984327   77859 cri.go:89] found id: ""
	I0729 18:31:35.984335   77859 logs.go:276] 1 containers: [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd]
	I0729 18:31:35.984381   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:35.988605   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:35.988671   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:36.031476   77859 cri.go:89] found id: "ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:36.031504   77859 cri.go:89] found id: ""
	I0729 18:31:36.031514   77859 logs.go:276] 1 containers: [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9]
	I0729 18:31:36.031567   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:36.037262   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:36.037319   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:36.078054   77859 cri.go:89] found id: "92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:36.078076   77859 cri.go:89] found id: ""
	I0729 18:31:36.078084   77859 logs.go:276] 1 containers: [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc]
	I0729 18:31:36.078134   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:36.082628   77859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:36.082693   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:36.122768   77859 cri.go:89] found id: ""
	I0729 18:31:36.122791   77859 logs.go:276] 0 containers: []
	W0729 18:31:36.122799   77859 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:36.122804   77859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 18:31:36.122849   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 18:31:36.166611   77859 cri.go:89] found id: "9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:36.166636   77859 cri.go:89] found id: "482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:36.166642   77859 cri.go:89] found id: ""
	I0729 18:31:36.166650   77859 logs.go:276] 2 containers: [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b]
	I0729 18:31:36.166712   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:36.171240   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:36.175336   77859 logs.go:123] Gathering logs for kube-controller-manager [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc] ...
	I0729 18:31:36.175354   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:36.233224   77859 logs.go:123] Gathering logs for storage-provisioner [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481] ...
	I0729 18:31:36.233255   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:36.282788   77859 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:36.282820   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:36.675615   77859 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:36.675660   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:36.731559   77859 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:36.731602   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:36.747814   77859 logs.go:123] Gathering logs for kube-scheduler [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd] ...
	I0729 18:31:36.747845   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:36.786940   77859 logs.go:123] Gathering logs for kube-proxy [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9] ...
	I0729 18:31:36.787036   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:36.829659   77859 logs.go:123] Gathering logs for storage-provisioner [482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b] ...
	I0729 18:31:36.829694   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:36.865907   77859 logs.go:123] Gathering logs for container status ...
	I0729 18:31:36.865939   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:36.908399   77859 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:36.908427   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 18:31:37.012220   77859 logs.go:123] Gathering logs for kube-apiserver [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4] ...
	I0729 18:31:37.012255   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:37.063429   77859 logs.go:123] Gathering logs for etcd [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a] ...
	I0729 18:31:37.063463   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:37.107615   77859 logs.go:123] Gathering logs for coredns [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b] ...
	I0729 18:31:37.107654   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:39.655973   77859 system_pods.go:59] 8 kube-system pods found
	I0729 18:31:39.656011   77859 system_pods.go:61] "coredns-7db6d8ff4d-mk6mx" [e005b1f9-cc7a-45aa-915e-85a461ebc814] Running
	I0729 18:31:39.656019   77859 system_pods.go:61] "etcd-default-k8s-diff-port-502055" [72b552cc-67b0-46bf-b3dd-b6732ebe8493] Running
	I0729 18:31:39.656025   77859 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-502055" [0dc22dbc-667e-4d6f-9938-b13bf3503f79] Running
	I0729 18:31:39.656032   77859 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-502055" [4df00b98-12cf-4359-9d98-8cce6ee9708a] Running
	I0729 18:31:39.656037   77859 system_pods.go:61] "kube-proxy-cgdm8" [57a99bb3-9e63-47dd-a958-5be7f3c0a9c0] Running
	I0729 18:31:39.656043   77859 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-502055" [247b7cd1-6267-469d-af05-b33b284ae846] Running
	I0729 18:31:39.656051   77859 system_pods.go:61] "metrics-server-569cc877fc-bm8tm" [6891d9ee-82db-4307-adf1-ff60d35506bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:31:39.656057   77859 system_pods.go:61] "storage-provisioner" [c2264d30-60dc-41f9-9b84-3b073031cf1b] Running
	I0729 18:31:39.656068   77859 system_pods.go:74] duration metric: took 3.851988452s to wait for pod list to return data ...
	I0729 18:31:39.656081   77859 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:31:39.658999   77859 default_sa.go:45] found service account: "default"
	I0729 18:31:39.659024   77859 default_sa.go:55] duration metric: took 2.935237ms for default service account to be created ...
	I0729 18:31:39.659034   77859 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 18:31:39.664926   77859 system_pods.go:86] 8 kube-system pods found
	I0729 18:31:39.664952   77859 system_pods.go:89] "coredns-7db6d8ff4d-mk6mx" [e005b1f9-cc7a-45aa-915e-85a461ebc814] Running
	I0729 18:31:39.664959   77859 system_pods.go:89] "etcd-default-k8s-diff-port-502055" [72b552cc-67b0-46bf-b3dd-b6732ebe8493] Running
	I0729 18:31:39.664966   77859 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-502055" [0dc22dbc-667e-4d6f-9938-b13bf3503f79] Running
	I0729 18:31:39.664973   77859 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-502055" [4df00b98-12cf-4359-9d98-8cce6ee9708a] Running
	I0729 18:31:39.664979   77859 system_pods.go:89] "kube-proxy-cgdm8" [57a99bb3-9e63-47dd-a958-5be7f3c0a9c0] Running
	I0729 18:31:39.664987   77859 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-502055" [247b7cd1-6267-469d-af05-b33b284ae846] Running
	I0729 18:31:39.665003   77859 system_pods.go:89] "metrics-server-569cc877fc-bm8tm" [6891d9ee-82db-4307-adf1-ff60d35506bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:31:39.665013   77859 system_pods.go:89] "storage-provisioner" [c2264d30-60dc-41f9-9b84-3b073031cf1b] Running
	I0729 18:31:39.665025   77859 system_pods.go:126] duration metric: took 5.974722ms to wait for k8s-apps to be running ...
	I0729 18:31:39.665036   77859 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 18:31:39.665093   77859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:31:39.685280   77859 system_svc.go:56] duration metric: took 20.237099ms WaitForService to wait for kubelet
	I0729 18:31:39.685311   77859 kubeadm.go:582] duration metric: took 4m24.205126513s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:31:39.685336   77859 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:31:39.688419   77859 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:31:39.688441   77859 node_conditions.go:123] node cpu capacity is 2
	I0729 18:31:39.688455   77859 node_conditions.go:105] duration metric: took 3.111768ms to run NodePressure ...
	I0729 18:31:39.688470   77859 start.go:241] waiting for startup goroutines ...
	I0729 18:31:39.688483   77859 start.go:246] waiting for cluster config update ...
	I0729 18:31:39.688497   77859 start.go:255] writing updated cluster config ...
	I0729 18:31:39.688830   77859 ssh_runner.go:195] Run: rm -f paused
	I0729 18:31:39.739685   77859 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 18:31:39.741763   77859 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-502055" cluster and "default" namespace by default
	I0729 18:31:37.226046   77627 out.go:204]   - Booting up control plane ...
	I0729 18:31:37.226163   77627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:31:37.227852   77627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:31:37.228710   77627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:31:37.248177   77627 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:31:37.248863   77627 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:31:37.248915   77627 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:31:37.376905   77627 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 18:31:37.377030   77627 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 18:31:37.878928   77627 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.066447ms
	I0729 18:31:37.879057   77627 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 18:31:38.935622   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:41.433736   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:42.880479   77627 kubeadm.go:310] [api-check] The API server is healthy after 5.001345894s
	I0729 18:31:42.892513   77627 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 18:31:42.910175   77627 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 18:31:42.948111   77627 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 18:31:42.948340   77627 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-409322 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 18:31:42.966823   77627 kubeadm.go:310] [bootstrap-token] Using token: f8a98i.3r2is78gllm02lfe
	I0729 18:31:42.968170   77627 out.go:204]   - Configuring RBAC rules ...
	I0729 18:31:42.968304   77627 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 18:31:42.978257   77627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 18:31:42.986458   77627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 18:31:42.989744   77627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 18:31:42.992484   77627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 18:31:42.995162   77627 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 18:31:43.287739   77627 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 18:31:43.726370   77627 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 18:31:44.290225   77627 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 18:31:44.291166   77627 kubeadm.go:310] 
	I0729 18:31:44.291267   77627 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 18:31:44.291278   77627 kubeadm.go:310] 
	I0729 18:31:44.291392   77627 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 18:31:44.291401   77627 kubeadm.go:310] 
	I0729 18:31:44.291436   77627 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 18:31:44.291530   77627 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 18:31:44.291589   77627 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 18:31:44.291606   77627 kubeadm.go:310] 
	I0729 18:31:44.291701   77627 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 18:31:44.291713   77627 kubeadm.go:310] 
	I0729 18:31:44.291788   77627 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 18:31:44.291797   77627 kubeadm.go:310] 
	I0729 18:31:44.291860   77627 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 18:31:44.291954   77627 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 18:31:44.292052   77627 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 18:31:44.292070   77627 kubeadm.go:310] 
	I0729 18:31:44.292167   77627 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 18:31:44.292269   77627 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 18:31:44.292280   77627 kubeadm.go:310] 
	I0729 18:31:44.292402   77627 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token f8a98i.3r2is78gllm02lfe \
	I0729 18:31:44.292543   77627 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 \
	I0729 18:31:44.292585   77627 kubeadm.go:310] 	--control-plane 
	I0729 18:31:44.292595   77627 kubeadm.go:310] 
	I0729 18:31:44.292710   77627 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 18:31:44.292732   77627 kubeadm.go:310] 
	I0729 18:31:44.292836   77627 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token f8a98i.3r2is78gllm02lfe \
	I0729 18:31:44.293015   77627 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 
	I0729 18:31:44.293440   77627 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:31:44.293500   77627 cni.go:84] Creating CNI manager for ""
	I0729 18:31:44.293512   77627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:31:44.295432   77627 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:31:44.296845   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:31:44.308178   77627 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 18:31:44.334403   77627 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 18:31:44.334542   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:44.334562   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-409322 minikube.k8s.io/updated_at=2024_07_29T18_31_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899 minikube.k8s.io/name=embed-certs-409322 minikube.k8s.io/primary=true
	I0729 18:31:44.366345   77627 ops.go:34] apiserver oom_adj: -16
	I0729 18:31:44.537970   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:43.433884   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:45.434714   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:45.039020   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:45.538831   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:46.038700   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:46.538761   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:47.038725   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:47.538100   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:48.038309   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:48.538896   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:49.039011   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:49.538333   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:47.435067   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:49.934658   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:50.038548   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:50.538590   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:51.038131   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:51.538253   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:52.038599   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:52.538827   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:53.038077   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:53.538860   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:54.038530   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:54.538952   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:52.433783   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:54.434442   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:56.434864   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:55.038263   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:55.538050   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:56.038006   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:56.538079   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:57.038042   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:57.538146   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:57.696274   77627 kubeadm.go:1113] duration metric: took 13.36179604s to wait for elevateKubeSystemPrivileges
	I0729 18:31:57.696308   77627 kubeadm.go:394] duration metric: took 5m12.066483926s to StartCluster
	I0729 18:31:57.696324   77627 settings.go:142] acquiring lock: {Name:mkd2c4591636cc1d19b23a0dab1807db2e7ea395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:31:57.696406   77627 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:31:57.698195   77627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:31:57.698479   77627 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:31:57.698592   77627 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 18:31:57.698674   77627 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-409322"
	I0729 18:31:57.698688   77627 addons.go:69] Setting metrics-server=true in profile "embed-certs-409322"
	I0729 18:31:57.698695   77627 addons.go:69] Setting default-storageclass=true in profile "embed-certs-409322"
	I0729 18:31:57.698714   77627 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-409322"
	I0729 18:31:57.698719   77627 addons.go:234] Setting addon metrics-server=true in "embed-certs-409322"
	W0729 18:31:57.698723   77627 addons.go:243] addon storage-provisioner should already be in state true
	W0729 18:31:57.698729   77627 addons.go:243] addon metrics-server should already be in state true
	I0729 18:31:57.698733   77627 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-409322"
	I0729 18:31:57.698755   77627 host.go:66] Checking if "embed-certs-409322" exists ...
	I0729 18:31:57.698676   77627 config.go:182] Loaded profile config "embed-certs-409322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:31:57.698760   77627 host.go:66] Checking if "embed-certs-409322" exists ...
	I0729 18:31:57.699157   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.699169   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.699207   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.699170   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.699229   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.699209   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.700201   77627 out.go:177] * Verifying Kubernetes components...
	I0729 18:31:57.701577   77627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:31:57.715130   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44873
	I0729 18:31:57.715156   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34459
	I0729 18:31:57.715708   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.715759   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.716320   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.716329   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.716344   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.716345   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.716666   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.716672   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.716868   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:31:57.717251   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.717283   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.717715   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41041
	I0729 18:31:57.718172   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.718684   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.718709   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.719111   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.719630   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.719670   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.720815   77627 addons.go:234] Setting addon default-storageclass=true in "embed-certs-409322"
	W0729 18:31:57.720839   77627 addons.go:243] addon default-storageclass should already be in state true
	I0729 18:31:57.720870   77627 host.go:66] Checking if "embed-certs-409322" exists ...
	I0729 18:31:57.721233   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.721264   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.733757   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0729 18:31:57.734325   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.735372   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.735397   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.735736   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.735928   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:31:57.735939   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35853
	I0729 18:31:57.736244   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.736923   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.736942   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.737318   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.737664   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:31:57.739761   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:31:57.740354   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:31:57.741103   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43867
	I0729 18:31:57.741489   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.741979   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.741999   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.742296   77627 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 18:31:57.742348   77627 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:31:57.742400   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.743411   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.743443   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.743498   77627 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 18:31:57.743515   77627 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 18:31:57.743537   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:31:57.743682   77627 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:31:57.743697   77627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 18:31:57.743711   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:31:57.748331   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.748743   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:31:57.748759   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.748941   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:31:57.748986   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.749110   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:31:57.749290   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:31:57.749423   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:31:57.749638   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:31:57.749650   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.749671   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:31:57.749834   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:31:57.749940   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:31:57.750051   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:31:57.760794   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33699
	I0729 18:31:57.761136   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.761574   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.761585   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.761954   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.762133   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:31:57.764344   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:31:57.764532   77627 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 18:31:57.764541   77627 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 18:31:57.764555   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:31:57.767111   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.767485   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:31:57.767498   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.767625   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:31:57.767763   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:31:57.767875   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:31:57.768004   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:31:57.965911   77627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:31:57.986557   77627 node_ready.go:35] waiting up to 6m0s for node "embed-certs-409322" to be "Ready" ...
	I0729 18:31:57.995790   77627 node_ready.go:49] node "embed-certs-409322" has status "Ready":"True"
	I0729 18:31:57.995809   77627 node_ready.go:38] duration metric: took 9.222398ms for node "embed-certs-409322" to be "Ready" ...
	I0729 18:31:57.995817   77627 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:31:58.003516   77627 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wpnfg" in "kube-system" namespace to be "Ready" ...
	I0729 18:31:58.047522   77627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:31:58.053274   77627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 18:31:58.053290   77627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 18:31:58.074101   77627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 18:31:58.074127   77627 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 18:31:58.088159   77627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 18:31:58.097491   77627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:31:58.097518   77627 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 18:31:58.125335   77627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:31:58.628396   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.628425   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.628466   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.628480   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.628847   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.628909   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Closing plugin on server side
	I0729 18:31:58.628918   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.628936   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.628946   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.628955   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.628914   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.628898   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Closing plugin on server side
	I0729 18:31:58.629017   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.629046   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.629268   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.629281   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.630616   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Closing plugin on server side
	I0729 18:31:58.630636   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.630649   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.660029   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.660061   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.660339   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.660358   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.975389   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.975414   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.975721   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.975740   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.975750   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.975760   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.976034   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.976051   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.976063   77627 addons.go:475] Verifying addon metrics-server=true in "embed-certs-409322"
	I0729 18:31:58.978172   77627 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 18:31:58.979568   77627 addons.go:510] duration metric: took 1.280977366s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
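(Illustrative aside, not part of the captured log.) The enable-addons step above copies each manifest under /etc/kubernetes/addons/ on the node and then applies them all in a single kubectl invocation against the node-local kubeconfig, as the Run: lines show. A minimal sketch of that apply step, assuming the same paths as the log and running kubectl directly instead of through minikube's ssh_runner:

```go
// Sketch only: applies the metrics-server addon manifests the way the log's
// single "kubectl apply -f ... -f ..." invocation does. Paths are taken from
// the log; running on the node would additionally need sudo and the
// /var/lib/minikube/binaries/... kubectl path.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("kubectl", args...)
	// point kubectl at the node-local admin kubeconfig, as in the log
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}
```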
	I0729 18:31:58.935700   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:00.935984   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:00.009825   77627 pod_ready.go:92] pod "coredns-7db6d8ff4d-wpnfg" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:00.009846   77627 pod_ready.go:81] duration metric: took 2.006300447s for pod "coredns-7db6d8ff4d-wpnfg" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:00.009855   77627 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:02.016463   77627 pod_ready.go:102] pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:04.515885   77627 pod_ready.go:102] pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:03.432654   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:05.434708   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:06.517308   77627 pod_ready.go:102] pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:09.016256   77627 pod_ready.go:92] pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.016276   77627 pod_ready.go:81] duration metric: took 9.006414116s for pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.016287   77627 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.021639   77627 pod_ready.go:92] pod "etcd-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.021661   77627 pod_ready.go:81] duration metric: took 5.365088ms for pod "etcd-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.021672   77627 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.026599   77627 pod_ready.go:92] pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.026618   77627 pod_ready.go:81] duration metric: took 4.939458ms for pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.026629   77627 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.031994   77627 pod_ready.go:92] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.032009   77627 pod_ready.go:81] duration metric: took 5.37307ms for pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.032020   77627 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kxf5z" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.036180   77627 pod_ready.go:92] pod "kube-proxy-kxf5z" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.036196   77627 pod_ready.go:81] duration metric: took 4.16934ms for pod "kube-proxy-kxf5z" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.036205   77627 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.414950   77627 pod_ready.go:92] pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.414973   77627 pod_ready.go:81] duration metric: took 378.76116ms for pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.414981   77627 pod_ready.go:38] duration metric: took 11.419116871s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
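(Illustrative aside, not part of the captured log.) The pod_ready.go waits above poll each system-critical pod until its PodReady condition reports True. A minimal client-go sketch of that kind of check; the helper below is hypothetical, not minikube's actual pod_ready.go, and it assumes a kubeconfig at the default ~/.kube/config location:

```go
// Sketch: poll one pod until its Ready condition is True or a timeout expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // re-check periodically, like the 102/92 lines above
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// pod name taken from the log purely as an example
	if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-embed-certs-409322", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
```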
	I0729 18:32:09.414995   77627 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:32:09.415042   77627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:32:09.434210   77627 api_server.go:72] duration metric: took 11.735691998s to wait for apiserver process to appear ...
	I0729 18:32:09.434240   77627 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:32:09.434260   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:32:09.439755   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I0729 18:32:09.440612   77627 api_server.go:141] control plane version: v1.30.3
	I0729 18:32:09.440631   77627 api_server.go:131] duration metric: took 6.382802ms to wait for apiserver health ...
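(Illustrative aside, not part of the captured log.) The api_server.go step above probes the apiserver's /healthz endpoint until it returns 200 with an "ok" body. A rough standalone sketch of the same probe, assuming the node IP and port shown in the log and skipping certificate verification purely for brevity (the serving cert is signed by the cluster CA, which a real check would load instead):

```go
// Sketch: poll the apiserver healthz endpoint until it answers 200 "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// brevity only: a proper check would trust the cluster CA instead
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://192.168.39.58:8443/healthz" // node IP and port from the log above
	for i := 0; i < 30; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver did not become healthy")
}
```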
	I0729 18:32:09.440640   77627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:32:09.617533   77627 system_pods.go:59] 9 kube-system pods found
	I0729 18:32:09.617564   77627 system_pods.go:61] "coredns-7db6d8ff4d-wpnfg" [687cbc8f-370a-4b72-bc1c-6ae36efe890e] Running
	I0729 18:32:09.617569   77627 system_pods.go:61] "coredns-7db6d8ff4d-wztpj" [1f1a01e7-9cec-4ba8-a340-8f9ccdd728d7] Running
	I0729 18:32:09.617572   77627 system_pods.go:61] "etcd-embed-certs-409322" [68de54c3-7d47-4e79-a064-08b013b1d910] Running
	I0729 18:32:09.617575   77627 system_pods.go:61] "kube-apiserver-embed-certs-409322" [dc1a0568-ef7c-493f-91fb-7438456daf6d] Running
	I0729 18:32:09.617579   77627 system_pods.go:61] "kube-controller-manager-embed-certs-409322" [da715e8c-2437-487b-b4e0-c93af2f079f7] Running
	I0729 18:32:09.617582   77627 system_pods.go:61] "kube-proxy-kxf5z" [74ed1812-b3bf-429d-b8f1-bdccb3415fb5] Running
	I0729 18:32:09.617584   77627 system_pods.go:61] "kube-scheduler-embed-certs-409322" [188cf21a-9a8a-45de-9a91-9e593626ce6d] Running
	I0729 18:32:09.617591   77627 system_pods.go:61] "metrics-server-569cc877fc-6q4nl" [57dc61cc-7490-49e5-9d03-c81aa5d25aea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:32:09.617596   77627 system_pods.go:61] "storage-provisioner" [b0b1e31d-9b5c-4e82-aea7-56184832c053] Running
	I0729 18:32:09.617604   77627 system_pods.go:74] duration metric: took 176.958452ms to wait for pod list to return data ...
	I0729 18:32:09.617614   77627 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:32:09.813846   77627 default_sa.go:45] found service account: "default"
	I0729 18:32:09.813871   77627 default_sa.go:55] duration metric: took 196.249412ms for default service account to be created ...
	I0729 18:32:09.813886   77627 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 18:32:10.019167   77627 system_pods.go:86] 9 kube-system pods found
	I0729 18:32:10.019199   77627 system_pods.go:89] "coredns-7db6d8ff4d-wpnfg" [687cbc8f-370a-4b72-bc1c-6ae36efe890e] Running
	I0729 18:32:10.019208   77627 system_pods.go:89] "coredns-7db6d8ff4d-wztpj" [1f1a01e7-9cec-4ba8-a340-8f9ccdd728d7] Running
	I0729 18:32:10.019214   77627 system_pods.go:89] "etcd-embed-certs-409322" [68de54c3-7d47-4e79-a064-08b013b1d910] Running
	I0729 18:32:10.019220   77627 system_pods.go:89] "kube-apiserver-embed-certs-409322" [dc1a0568-ef7c-493f-91fb-7438456daf6d] Running
	I0729 18:32:10.019227   77627 system_pods.go:89] "kube-controller-manager-embed-certs-409322" [da715e8c-2437-487b-b4e0-c93af2f079f7] Running
	I0729 18:32:10.019233   77627 system_pods.go:89] "kube-proxy-kxf5z" [74ed1812-b3bf-429d-b8f1-bdccb3415fb5] Running
	I0729 18:32:10.019239   77627 system_pods.go:89] "kube-scheduler-embed-certs-409322" [188cf21a-9a8a-45de-9a91-9e593626ce6d] Running
	I0729 18:32:10.019249   77627 system_pods.go:89] "metrics-server-569cc877fc-6q4nl" [57dc61cc-7490-49e5-9d03-c81aa5d25aea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:32:10.019257   77627 system_pods.go:89] "storage-provisioner" [b0b1e31d-9b5c-4e82-aea7-56184832c053] Running
	I0729 18:32:10.019267   77627 system_pods.go:126] duration metric: took 205.375742ms to wait for k8s-apps to be running ...
	I0729 18:32:10.019278   77627 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 18:32:10.019326   77627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:32:10.034632   77627 system_svc.go:56] duration metric: took 15.345747ms WaitForService to wait for kubelet
	I0729 18:32:10.034659   77627 kubeadm.go:582] duration metric: took 12.336145267s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:32:10.034687   77627 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:32:10.214205   77627 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:32:10.214240   77627 node_conditions.go:123] node cpu capacity is 2
	I0729 18:32:10.214255   77627 node_conditions.go:105] duration metric: took 179.559492ms to run NodePressure ...
	I0729 18:32:10.214269   77627 start.go:241] waiting for startup goroutines ...
	I0729 18:32:10.214279   77627 start.go:246] waiting for cluster config update ...
	I0729 18:32:10.214297   77627 start.go:255] writing updated cluster config ...
	I0729 18:32:10.214639   77627 ssh_runner.go:195] Run: rm -f paused
	I0729 18:32:10.264858   77627 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 18:32:10.266718   77627 out.go:177] * Done! kubectl is now configured to use "embed-certs-409322" cluster and "default" namespace by default
	I0729 18:32:07.934519   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:10.434593   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:13.262907   78080 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 18:32:13.263487   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:13.263679   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:32:12.934686   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:13.928481   77394 pod_ready.go:81] duration metric: took 4m0.00080059s for pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace to be "Ready" ...
	E0729 18:32:13.928509   77394 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 18:32:13.928528   77394 pod_ready.go:38] duration metric: took 4m10.042077465s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:32:13.928554   77394 kubeadm.go:597] duration metric: took 4m18.205651497s to restartPrimaryControlPlane
	W0729 18:32:13.928623   77394 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 18:32:13.928649   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:32:18.264261   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:18.264554   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:32:28.265190   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:28.265433   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:32:40.226240   77394 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.297571665s)
	I0729 18:32:40.226316   77394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:32:40.243407   77394 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:32:40.254946   77394 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:32:40.264608   77394 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:32:40.264631   77394 kubeadm.go:157] found existing configuration files:
	
	I0729 18:32:40.264675   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:32:40.274180   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:32:40.274231   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:32:40.283752   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:32:40.293163   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:32:40.293232   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:32:40.302533   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:32:40.311972   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:32:40.312024   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:32:40.321513   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:32:40.330546   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:32:40.330599   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
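(Illustrative aside, not part of the captured log.) Before re-running kubeadm init, the lines above check whether each existing kubeconfig under /etc/kubernetes still points at https://control-plane.minikube.internal:8443 and remove any that does not; here the files are simply absent after the reset, so each grep exits with status 2 and the rm is a no-op. A small sketch of that keep-or-remove logic, run on the local filesystem instead of over SSH:

```go
// Sketch: keep a kubeconfig only if it already references the expected
// control-plane endpoint; otherwise delete it so kubeadm init regenerates it.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// missing or pointing elsewhere: remove (ignoring "not found") so
			// the subsequent kubeadm init writes a fresh file
			_ = os.Remove(f)
			fmt.Printf("stale or missing, removed: %s\n", f)
			continue
		}
		fmt.Printf("kept: %s\n", f)
	}
}
```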
	I0729 18:32:40.340190   77394 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:32:40.389517   77394 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 18:32:40.389592   77394 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:32:40.508682   77394 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:32:40.508783   77394 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:32:40.508859   77394 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 18:32:40.517673   77394 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:32:40.520623   77394 out.go:204]   - Generating certificates and keys ...
	I0729 18:32:40.520726   77394 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:32:40.520824   77394 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:32:40.520893   77394 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:32:40.520961   77394 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:32:40.521045   77394 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:32:40.521094   77394 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:32:40.521171   77394 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:32:40.521254   77394 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:32:40.521357   77394 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:32:40.521475   77394 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:32:40.521535   77394 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:32:40.521606   77394 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:32:40.615870   77394 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:32:40.837902   77394 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 18:32:40.924418   77394 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:32:41.068573   77394 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:32:41.287201   77394 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:32:41.287991   77394 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:32:41.293523   77394 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:32:41.295211   77394 out.go:204]   - Booting up control plane ...
	I0729 18:32:41.295329   77394 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:32:41.295455   77394 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:32:41.295560   77394 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:32:41.317802   77394 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:32:41.324522   77394 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:32:41.324589   77394 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:32:41.463007   77394 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 18:32:41.463116   77394 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 18:32:41.982144   77394 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 519.208408ms
	I0729 18:32:41.982263   77394 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 18:32:46.983564   77394 kubeadm.go:310] [api-check] The API server is healthy after 5.001335599s
	I0729 18:32:46.999811   77394 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 18:32:47.018194   77394 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 18:32:47.051359   77394 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 18:32:47.051564   77394 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-888056 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 18:32:47.062615   77394 kubeadm.go:310] [bootstrap-token] Using token: a14u5x.5d4oe8yqdl9tiifc
	I0729 18:32:47.064051   77394 out.go:204]   - Configuring RBAC rules ...
	I0729 18:32:47.064187   77394 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 18:32:47.071856   77394 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 18:32:47.084985   77394 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 18:32:47.088622   77394 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 18:32:47.091797   77394 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 18:32:47.096194   77394 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 18:32:47.391394   77394 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 18:32:47.834314   77394 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 18:32:48.394665   77394 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 18:32:48.394689   77394 kubeadm.go:310] 
	I0729 18:32:48.394763   77394 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 18:32:48.394797   77394 kubeadm.go:310] 
	I0729 18:32:48.394928   77394 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 18:32:48.394941   77394 kubeadm.go:310] 
	I0729 18:32:48.394979   77394 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 18:32:48.395058   77394 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 18:32:48.395126   77394 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 18:32:48.395141   77394 kubeadm.go:310] 
	I0729 18:32:48.395221   77394 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 18:32:48.395230   77394 kubeadm.go:310] 
	I0729 18:32:48.395297   77394 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 18:32:48.395306   77394 kubeadm.go:310] 
	I0729 18:32:48.395374   77394 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 18:32:48.395467   77394 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 18:32:48.395554   77394 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 18:32:48.395563   77394 kubeadm.go:310] 
	I0729 18:32:48.395652   77394 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 18:32:48.395766   77394 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 18:32:48.395778   77394 kubeadm.go:310] 
	I0729 18:32:48.395886   77394 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a14u5x.5d4oe8yqdl9tiifc \
	I0729 18:32:48.396030   77394 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 \
	I0729 18:32:48.396062   77394 kubeadm.go:310] 	--control-plane 
	I0729 18:32:48.396071   77394 kubeadm.go:310] 
	I0729 18:32:48.396191   77394 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 18:32:48.396200   77394 kubeadm.go:310] 
	I0729 18:32:48.396276   77394 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a14u5x.5d4oe8yqdl9tiifc \
	I0729 18:32:48.396393   77394 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 
	I0729 18:32:48.397540   77394 kubeadm.go:310] W0729 18:32:40.358164    2949 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 18:32:48.397921   77394 kubeadm.go:310] W0729 18:32:40.359840    2949 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 18:32:48.398071   77394 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:32:48.398090   77394 cni.go:84] Creating CNI manager for ""
	I0729 18:32:48.398099   77394 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:32:48.399641   77394 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:32:48.266531   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:48.266736   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:32:48.400846   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:32:48.412594   77394 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
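	The bridge CNI conflist written above (496 bytes scp'd into /etc/cni/net.d/) is not reproduced in this log. To see what was actually installed, it can be read back from the node, e.g. (illustrative; -p selects the profile shown in this run):
	    minikube -p no-preload-888056 ssh -- sudo ls /etc/cni/net.d/
	    minikube -p no-preload-888056 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist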
	I0729 18:32:48.434792   77394 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 18:32:48.434872   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:48.434907   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-888056 minikube.k8s.io/updated_at=2024_07_29T18_32_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899 minikube.k8s.io/name=no-preload-888056 minikube.k8s.io/primary=true
	I0729 18:32:48.672892   77394 ops.go:34] apiserver oom_adj: -16
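	The two kubectl invocations above grant the kube-system default service account cluster-admin (the minikube-rbac binding) and label the node with minikube metadata. A quick manual check of both, assuming the kubeconfig context carries the profile name:
	    kubectl --context no-preload-888056 get clusterrolebinding minikube-rbac -o wide
	    kubectl --context no-preload-888056 get node no-preload-888056 --show-labels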
	I0729 18:32:48.673144   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:49.173811   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:49.673775   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:50.173717   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:50.673774   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:51.174068   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:51.673565   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:52.173431   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:52.673602   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:53.173912   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:53.315565   77394 kubeadm.go:1113] duration metric: took 4.880757535s to wait for elevateKubeSystemPrivileges
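	The repeated "kubectl get sa default" calls above are a poll: minikube retries every ~500ms until the default service account exists before declaring elevateKubeSystemPrivileges done. A rough shell equivalent of that wait (sketch only):
	    until sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done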
	I0729 18:32:53.315609   77394 kubeadm.go:394] duration metric: took 4m57.645527986s to StartCluster
	I0729 18:32:53.315633   77394 settings.go:142] acquiring lock: {Name:mkd2c4591636cc1d19b23a0dab1807db2e7ea395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:32:53.315736   77394 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:32:53.317360   77394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:32:53.317579   77394 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.80 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:32:53.317669   77394 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 18:32:53.317784   77394 addons.go:69] Setting storage-provisioner=true in profile "no-preload-888056"
	I0729 18:32:53.317820   77394 addons.go:234] Setting addon storage-provisioner=true in "no-preload-888056"
	I0729 18:32:53.317817   77394 addons.go:69] Setting default-storageclass=true in profile "no-preload-888056"
	W0729 18:32:53.317835   77394 addons.go:243] addon storage-provisioner should already be in state true
	I0729 18:32:53.317840   77394 config.go:182] Loaded profile config "no-preload-888056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:32:53.317836   77394 addons.go:69] Setting metrics-server=true in profile "no-preload-888056"
	I0729 18:32:53.317861   77394 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-888056"
	I0729 18:32:53.317878   77394 host.go:66] Checking if "no-preload-888056" exists ...
	I0729 18:32:53.317882   77394 addons.go:234] Setting addon metrics-server=true in "no-preload-888056"
	W0729 18:32:53.317892   77394 addons.go:243] addon metrics-server should already be in state true
	I0729 18:32:53.317927   77394 host.go:66] Checking if "no-preload-888056" exists ...
	I0729 18:32:53.318302   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.318308   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.318334   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.318345   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.318301   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.318441   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.319022   77394 out.go:177] * Verifying Kubernetes components...
	I0729 18:32:53.320383   77394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:32:53.335666   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38257
	I0729 18:32:53.336170   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.336860   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.336896   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.337301   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.338104   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39753
	I0729 18:32:53.338137   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40655
	I0729 18:32:53.338545   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.338559   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.338595   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.338614   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.339076   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.339094   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.339163   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.339188   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.339510   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.340089   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.340126   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.340346   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.340557   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:32:53.344286   77394 addons.go:234] Setting addon default-storageclass=true in "no-preload-888056"
	W0729 18:32:53.344307   77394 addons.go:243] addon default-storageclass should already be in state true
	I0729 18:32:53.344335   77394 host.go:66] Checking if "no-preload-888056" exists ...
	I0729 18:32:53.344702   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.344727   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.356006   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33765
	I0729 18:32:53.356613   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.357135   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.357159   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.357517   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.357604   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34733
	I0729 18:32:53.357752   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:32:53.358011   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.358472   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.358490   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.358898   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.359110   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:32:53.359546   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:32:53.360493   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:32:53.361662   77394 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:32:53.362464   77394 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 18:32:53.363294   77394 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:32:53.363311   77394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 18:32:53.363331   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:32:53.364170   77394 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 18:32:53.364182   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41425
	I0729 18:32:53.364186   77394 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 18:32:53.364205   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:32:53.364560   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.365040   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.365061   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.365515   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.365963   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.365983   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.367883   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.368768   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.369264   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:32:53.369284   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.369576   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:32:53.369591   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.369858   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:32:53.369964   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:32:53.370009   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:32:53.370102   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:32:53.370169   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:32:53.370198   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:32:53.370317   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:32:53.370344   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:32:53.382571   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37093
	I0729 18:32:53.382940   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.383311   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.383336   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.383748   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.383946   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:32:53.385570   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:32:53.385761   77394 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 18:32:53.385775   77394 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 18:32:53.385792   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:32:53.388411   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.388756   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:32:53.388774   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.389017   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:32:53.389193   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:32:53.389350   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:32:53.389463   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:32:53.585542   77394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:32:53.645556   77394 node_ready.go:35] waiting up to 6m0s for node "no-preload-888056" to be "Ready" ...
	I0729 18:32:53.657965   77394 node_ready.go:49] node "no-preload-888056" has status "Ready":"True"
	I0729 18:32:53.657997   77394 node_ready.go:38] duration metric: took 12.408834ms for node "no-preload-888056" to be "Ready" ...
	I0729 18:32:53.658010   77394 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:32:53.673068   77394 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:53.724224   77394 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 18:32:53.724248   77394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 18:32:53.763536   77394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 18:32:53.774123   77394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:32:53.812615   77394 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 18:32:53.812639   77394 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 18:32:53.945274   77394 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:32:53.945303   77394 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 18:32:54.107180   77394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:32:54.184354   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.184379   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.184699   77394 main.go:141] libmachine: (no-preload-888056) DBG | Closing plugin on server side
	I0729 18:32:54.184748   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.184762   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.184776   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.184786   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.185015   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.185043   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.185077   77394 main.go:141] libmachine: (no-preload-888056) DBG | Closing plugin on server side
	I0729 18:32:54.244759   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.244781   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.245108   77394 main.go:141] libmachine: (no-preload-888056) DBG | Closing plugin on server side
	I0729 18:32:54.245156   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.245169   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.782604   77394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.008443119s)
	I0729 18:32:54.782663   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.782676   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.782990   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.783010   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.783020   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.783028   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.783265   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.783283   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.946051   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.946074   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.946396   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.946418   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.946430   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.946439   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.946680   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.946698   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.946710   77394 addons.go:475] Verifying addon metrics-server=true in "no-preload-888056"
	I0729 18:32:54.948362   77394 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 18:32:54.949821   77394 addons.go:510] duration metric: took 1.632153415s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
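	The same addons can also be inspected or toggled from the minikube CLI after the profile is up; these commands are for reference only, the test run enables the addons programmatically as logged above:
	    minikube -p no-preload-888056 addons list
	    minikube -p no-preload-888056 addons enable metrics-server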
	I0729 18:32:55.679655   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:57.680175   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace has status "Ready":"False"
	I0729 18:33:00.179877   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace has status "Ready":"False"
	I0729 18:33:01.180068   77394 pod_ready.go:92] pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.180094   77394 pod_ready.go:81] duration metric: took 7.506992362s for pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.180106   77394 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-j9ddw" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.185742   77394 pod_ready.go:92] pod "coredns-5cfdc65f69-j9ddw" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.185760   77394 pod_ready.go:81] duration metric: took 5.647157ms for pod "coredns-5cfdc65f69-j9ddw" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.185769   77394 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.190056   77394 pod_ready.go:92] pod "etcd-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.190077   77394 pod_ready.go:81] duration metric: took 4.30181ms for pod "etcd-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.190085   77394 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.194255   77394 pod_ready.go:92] pod "kube-apiserver-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.194273   77394 pod_ready.go:81] duration metric: took 4.182006ms for pod "kube-apiserver-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.194284   77394 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.199056   77394 pod_ready.go:92] pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.199072   77394 pod_ready.go:81] duration metric: took 4.779158ms for pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.199081   77394 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-94ff9" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.578279   77394 pod_ready.go:92] pod "kube-proxy-94ff9" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.578299   77394 pod_ready.go:81] duration metric: took 379.211109ms for pod "kube-proxy-94ff9" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.578308   77394 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:02.378184   77394 pod_ready.go:92] pod "kube-scheduler-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:02.378205   77394 pod_ready.go:81] duration metric: took 799.890202ms for pod "kube-scheduler-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:02.378212   77394 pod_ready.go:38] duration metric: took 8.720189182s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
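	The pod_ready wait above can be reproduced with "kubectl wait" against the same labels the test lists (k8s-app=kube-dns, component=kube-apiserver, ...), for example:
	    kubectl --context no-preload-888056 -n kube-system wait pod \
	      -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
	    kubectl --context no-preload-888056 -n kube-system wait pod \
	      -l component=kube-apiserver --for=condition=Ready --timeout=6m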
	I0729 18:33:02.378226   77394 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:33:02.378282   77394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:33:02.396023   77394 api_server.go:72] duration metric: took 9.07841179s to wait for apiserver process to appear ...
	I0729 18:33:02.396050   77394 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:33:02.396070   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:33:02.403736   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 200:
	ok
	I0729 18:33:02.404828   77394 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 18:33:02.404850   77394 api_server.go:131] duration metric: took 8.793481ms to wait for apiserver health ...
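	The healthz probe is a plain HTTPS GET; under the default RBAC, /healthz is typically readable without credentials, so it can be repeated by hand (skipping TLS verification):
	    curl -k https://192.168.72.80:8443/healthz
	    # expected body: ok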
	I0729 18:33:02.404858   77394 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:33:02.580656   77394 system_pods.go:59] 9 kube-system pods found
	I0729 18:33:02.580683   77394 system_pods.go:61] "coredns-5cfdc65f69-bbh6c" [66b43af3-78eb-437f-81d7-eedb4cc34349] Running
	I0729 18:33:02.580687   77394 system_pods.go:61] "coredns-5cfdc65f69-j9ddw" [679f8750-86aa-4e00-8291-6996b54b1930] Running
	I0729 18:33:02.580691   77394 system_pods.go:61] "etcd-no-preload-888056" [abcd648d-659a-4f02-a769-f2222eaac945] Running
	I0729 18:33:02.580695   77394 system_pods.go:61] "kube-apiserver-no-preload-888056" [99a48803-06b1-44a6-a0cc-f28f2ba7235f] Running
	I0729 18:33:02.580699   77394 system_pods.go:61] "kube-controller-manager-no-preload-888056" [6bb3d64c-9fef-41ee-a68d-170fac01dec5] Running
	I0729 18:33:02.580702   77394 system_pods.go:61] "kube-proxy-94ff9" [dd06899e-3d54-4b71-bda6-f8c6d06ce100] Running
	I0729 18:33:02.580704   77394 system_pods.go:61] "kube-scheduler-no-preload-888056" [a1b60226-df5e-45ce-8382-a8d277278129] Running
	I0729 18:33:02.580710   77394 system_pods.go:61] "metrics-server-78fcd8795b-9qqmj" [45bbbaf3-cf3e-4db1-9eec-693425bc5dff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:33:02.580714   77394 system_pods.go:61] "storage-provisioner" [0aacb67c-abea-47fb-a2f1-f1245e68599a] Running
	I0729 18:33:02.580721   77394 system_pods.go:74] duration metric: took 175.857868ms to wait for pod list to return data ...
	I0729 18:33:02.580728   77394 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:33:02.778962   77394 default_sa.go:45] found service account: "default"
	I0729 18:33:02.778987   77394 default_sa.go:55] duration metric: took 198.250326ms for default service account to be created ...
	I0729 18:33:02.778995   77394 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 18:33:02.981123   77394 system_pods.go:86] 9 kube-system pods found
	I0729 18:33:02.981159   77394 system_pods.go:89] "coredns-5cfdc65f69-bbh6c" [66b43af3-78eb-437f-81d7-eedb4cc34349] Running
	I0729 18:33:02.981166   77394 system_pods.go:89] "coredns-5cfdc65f69-j9ddw" [679f8750-86aa-4e00-8291-6996b54b1930] Running
	I0729 18:33:02.981175   77394 system_pods.go:89] "etcd-no-preload-888056" [abcd648d-659a-4f02-a769-f2222eaac945] Running
	I0729 18:33:02.981181   77394 system_pods.go:89] "kube-apiserver-no-preload-888056" [99a48803-06b1-44a6-a0cc-f28f2ba7235f] Running
	I0729 18:33:02.981186   77394 system_pods.go:89] "kube-controller-manager-no-preload-888056" [6bb3d64c-9fef-41ee-a68d-170fac01dec5] Running
	I0729 18:33:02.981190   77394 system_pods.go:89] "kube-proxy-94ff9" [dd06899e-3d54-4b71-bda6-f8c6d06ce100] Running
	I0729 18:33:02.981196   77394 system_pods.go:89] "kube-scheduler-no-preload-888056" [a1b60226-df5e-45ce-8382-a8d277278129] Running
	I0729 18:33:02.981206   77394 system_pods.go:89] "metrics-server-78fcd8795b-9qqmj" [45bbbaf3-cf3e-4db1-9eec-693425bc5dff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:33:02.981214   77394 system_pods.go:89] "storage-provisioner" [0aacb67c-abea-47fb-a2f1-f1245e68599a] Running
	I0729 18:33:02.981228   77394 system_pods.go:126] duration metric: took 202.226569ms to wait for k8s-apps to be running ...
	I0729 18:33:02.981239   77394 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 18:33:02.981290   77394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:33:02.999134   77394 system_svc.go:56] duration metric: took 17.878004ms WaitForService to wait for kubelet
	I0729 18:33:02.999169   77394 kubeadm.go:582] duration metric: took 9.681562891s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:33:02.999187   77394 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:33:03.179246   77394 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:33:03.179274   77394 node_conditions.go:123] node cpu capacity is 2
	I0729 18:33:03.179286   77394 node_conditions.go:105] duration metric: took 180.093491ms to run NodePressure ...
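	The NodePressure check reads the node's capacity (17734596Ki ephemeral storage, 2 CPUs here) and conditions from the API; the same fields are visible with a jsonpath query, e.g.:
	    kubectl --context no-preload-888056 get node no-preload-888056 \
	      -o jsonpath='{.status.capacity}{"\n"}{.status.conditions[*].type}{"\n"}'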
	I0729 18:33:03.179312   77394 start.go:241] waiting for startup goroutines ...
	I0729 18:33:03.179322   77394 start.go:246] waiting for cluster config update ...
	I0729 18:33:03.179344   77394 start.go:255] writing updated cluster config ...
	I0729 18:33:03.179658   77394 ssh_runner.go:195] Run: rm -f paused
	I0729 18:33:03.228664   77394 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 18:33:03.230706   77394 out.go:177] * Done! kubectl is now configured to use "no-preload-888056" cluster and "default" namespace by default
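	With the no-preload-888056 context now the default, a quick sanity check of the freshly started profile looks like this (illustrative):
	    kubectl config current-context     # should print no-preload-888056
	    kubectl get nodes -o wide
	    kubectl -n kube-system get pods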
	I0729 18:33:28.269122   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:33:28.269375   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:33:28.269399   78080 kubeadm.go:310] 
	I0729 18:33:28.269433   78080 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 18:33:28.269471   78080 kubeadm.go:310] 		timed out waiting for the condition
	I0729 18:33:28.269480   78080 kubeadm.go:310] 
	I0729 18:33:28.269508   78080 kubeadm.go:310] 	This error is likely caused by:
	I0729 18:33:28.269541   78080 kubeadm.go:310] 		- The kubelet is not running
	I0729 18:33:28.269686   78080 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 18:33:28.269698   78080 kubeadm.go:310] 
	I0729 18:33:28.269846   78080 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 18:33:28.269902   78080 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 18:33:28.269946   78080 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 18:33:28.269969   78080 kubeadm.go:310] 
	I0729 18:33:28.270132   78080 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 18:33:28.270246   78080 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 18:33:28.270258   78080 kubeadm.go:310] 
	I0729 18:33:28.270434   78080 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 18:33:28.270567   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 18:33:28.270674   78080 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 18:33:28.270774   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 18:33:28.270784   78080 kubeadm.go:310] 
	I0729 18:33:28.271347   78080 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:33:28.271428   78080 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 18:33:28.271503   78080 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 18:33:28.271650   78080 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 18:33:28.271713   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:33:28.743675   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:33:28.759228   78080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:33:28.768522   78080 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:33:28.768546   78080 kubeadm.go:157] found existing configuration files:
	
	I0729 18:33:28.768593   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:33:28.777423   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:33:28.777481   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:33:28.786450   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:33:28.795335   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:33:28.795386   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:33:28.804519   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:33:28.813137   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:33:28.813193   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:33:28.822053   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:33:28.830463   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:33:28.830513   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
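	The grep/rm sequence above (kubeadm.go:163) deletes any leftover kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint before retrying kubeadm init. A compact equivalent of that cleanup, as a sketch:
	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	        || sudo rm -f "/etc/kubernetes/${f}.conf"
	    done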
	I0729 18:33:28.839818   78080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:33:29.066010   78080 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:35:25.197434   78080 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 18:35:25.197566   78080 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 18:35:25.199476   78080 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 18:35:25.199554   78080 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:35:25.199667   78080 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:35:25.199800   78080 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:35:25.199937   78080 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:35:25.200054   78080 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:35:25.201801   78080 out.go:204]   - Generating certificates and keys ...
	I0729 18:35:25.201875   78080 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:35:25.201944   78080 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:35:25.202073   78080 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:35:25.202136   78080 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:35:25.202231   78080 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:35:25.202287   78080 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:35:25.202339   78080 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:35:25.202426   78080 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:35:25.202492   78080 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:35:25.202560   78080 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:35:25.202603   78080 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:35:25.202692   78080 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:35:25.202779   78080 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:35:25.202863   78080 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:35:25.202962   78080 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:35:25.203070   78080 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:35:25.203213   78080 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:35:25.203289   78080 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:35:25.203323   78080 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:35:25.203381   78080 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:35:25.204837   78080 out.go:204]   - Booting up control plane ...
	I0729 18:35:25.204920   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:35:25.204985   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:35:25.205053   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:35:25.205146   78080 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:35:25.205274   78080 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 18:35:25.205316   78080 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 18:35:25.205379   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.205591   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.205658   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.205828   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.205926   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.206142   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.206204   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.206411   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.206488   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.206683   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.206698   78080 kubeadm.go:310] 
	I0729 18:35:25.206755   78080 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 18:35:25.206817   78080 kubeadm.go:310] 		timed out waiting for the condition
	I0729 18:35:25.206827   78080 kubeadm.go:310] 
	I0729 18:35:25.206860   78080 kubeadm.go:310] 	This error is likely caused by:
	I0729 18:35:25.206890   78080 kubeadm.go:310] 		- The kubelet is not running
	I0729 18:35:25.206975   78080 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 18:35:25.206985   78080 kubeadm.go:310] 
	I0729 18:35:25.207099   78080 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 18:35:25.207134   78080 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 18:35:25.207167   78080 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 18:35:25.207177   78080 kubeadm.go:310] 
	I0729 18:35:25.207289   78080 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 18:35:25.207403   78080 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 18:35:25.207412   78080 kubeadm.go:310] 
	I0729 18:35:25.207532   78080 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 18:35:25.207640   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 18:35:25.207754   78080 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 18:35:25.207821   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 18:35:25.207854   78080 kubeadm.go:310] 
	I0729 18:35:25.207886   78080 kubeadm.go:394] duration metric: took 7m57.080498205s to StartCluster
	I0729 18:35:25.207923   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:35:25.207983   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:35:25.251803   78080 cri.go:89] found id: ""
	I0729 18:35:25.251841   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.251852   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:35:25.251859   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:35:25.251920   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:35:25.287842   78080 cri.go:89] found id: ""
	I0729 18:35:25.287877   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.287895   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:35:25.287903   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:35:25.287967   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:35:25.324546   78080 cri.go:89] found id: ""
	I0729 18:35:25.324573   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.324582   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:35:25.324588   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:35:25.324634   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:35:25.375723   78080 cri.go:89] found id: ""
	I0729 18:35:25.375746   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.375753   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:35:25.375759   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:35:25.375812   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:35:25.412580   78080 cri.go:89] found id: ""
	I0729 18:35:25.412604   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.412612   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:35:25.412617   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:35:25.412664   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:35:25.449360   78080 cri.go:89] found id: ""
	I0729 18:35:25.449397   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.449406   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:35:25.449413   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:35:25.449464   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:35:25.485655   78080 cri.go:89] found id: ""
	I0729 18:35:25.485687   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.485698   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:35:25.485705   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:35:25.485769   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:35:25.521752   78080 cri.go:89] found id: ""
	I0729 18:35:25.521776   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.521783   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:35:25.521792   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:35:25.521808   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:35:25.562894   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:35:25.562922   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:35:25.623879   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:35:25.623912   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:35:25.647315   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:35:25.647341   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:35:25.744827   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:35:25.744850   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:35:25.744865   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0729 18:35:25.849394   78080 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 18:35:25.849445   78080 out.go:239] * 
	W0729 18:35:25.849520   78080 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 18:35:25.849558   78080 out.go:239] * 
	W0729 18:35:25.850438   78080 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 18:35:25.853770   78080 out.go:177] 
	W0729 18:35:25.854982   78080 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 18:35:25.855035   78080 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 18:35:25.855060   78080 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 18:35:25.856444   78080 out.go:177] 
	
	
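The repeated kubeadm failure above never gets past the kubelet health check on 127.0.0.1:10248, and minikube's own suggestion in the log is to check 'journalctl -xeu kubelet' and retry with the systemd cgroup driver. A minimal sketch of that retry, assuming a hypothetical profile name (the real profile is generated by the test harness) and the same Kubernetes version and container runtime seen in this run:

    # Retry the start with the kubelet cgroup driver the log suggests.
    # Profile name "old-k8s-version-demo" is hypothetical.
    minikube start -p old-k8s-version-demo \
      --kubernetes-version=v1.20.0 \
      --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd

    # If the control plane still does not come up, inspect the kubelet on the node,
    # as the kubeadm output above recommends:
    minikube ssh -p old-k8s-version-demo -- 'sudo systemctl status kubelet --no-pager; sudo journalctl -xeu kubelet --no-pager | tail -n 50'
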
	==> CRI-O <==
	Jul 29 18:41:12 embed-certs-409322 crio[736]: time="2024-07-29 18:41:12.192283919Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278472192256512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=347ea220-b0f7-4931-8068-234bb144179b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:41:12 embed-certs-409322 crio[736]: time="2024-07-29 18:41:12.193189285Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=56aa0855-c1e4-4d65-b8bf-e43752d1757d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:12 embed-certs-409322 crio[736]: time="2024-07-29 18:41:12.193251542Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=56aa0855-c1e4-4d65-b8bf-e43752d1757d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:12 embed-certs-409322 crio[736]: time="2024-07-29 18:41:12.193434161Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b7c5ae6c21d8f1d4f5a76a7b2b00dee58ba86c016e42d5cd9ce48f176d17841a,PodSandboxId:82026e4cbebb5982849c683056b2d4c9434dad95444553ce97c5cbae66293adc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277919246250692,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wztpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1a01e7-9cec-4ba8-a340-8f9ccdd728d7,},Annotations:map[string]string{io.kubernetes.container.hash: 9db37acf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fed3fc2d0d695c51ec9efec220330db6860a605adc586d580af4c01807311ff,PodSandboxId:04fd5d7fe81c81684f1b7141c3faef454f9405126ae412a4854a6188fcfb611b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277919205954883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wpnfg,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 687cbc8f-370a-4b72-bc1c-6ae36efe890e,},Annotations:map[string]string{io.kubernetes.container.hash: d69d4181,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e402548bbd184a3a7181b5b5ca203f0a39d4bbd2f8d1914859e6a313e39f3e2b,PodSandboxId:ce759b42015c7e603f9ebd6843740a9d52aae7491d7e28054e8fb0fb266bbd77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1722277919153466747,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0b1e31d-9b5c-4e82-aea7-56184832c053,},Annotations:map[string]string{io.kubernetes.container.hash: 5369b1b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2ce2ac925a3585ef4fcd922a7c109fe2a4d0d04b242caab76127ae6deb93e4,PodSandboxId:f72958198ab02cacdfc9e0d6c64c0c78c7cfc66f1c62d82ad78cf21d5cfa247e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722277917569074128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxf5z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ed1812-b3bf-429d-b8f1-bdccb3415fb5,},Annotations:map[string]string{io.kubernetes.container.hash: b2e8647d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37921ddb40291c04357e83617a73e854a3aafeff689969b204c711a6d7ae42fc,PodSandboxId:7a83e374fcfcad04cf8591c3f238d286e1dce8a7dc19cd687620753f3cbba4ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722277898349454609,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092843ca16e3768154d9eaefe813d4c4,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2635bb0eb62d0c9d915e1779b6f1295ca78236982bffa8667c094b32b7ef83d1,PodSandboxId:a52ce0bf9b3060f373ba4f8f1f53f58dfb0392645383900cbcb929df59ac830c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722277898341865986,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f630b5e9caeb1a971fb8d4ec7f20523,},Annotations:map[string]string{io.kubernetes.container.hash: 4d95585c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b4971b286aa874f2e68c05351d5ab5e550733aa9de2cf7f91f6ee982e33501,PodSandboxId:bf85cb0dd5956d55683ad1694e3566c2eb05aaa501aec3eed08d8c988b9af21b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722277898334151449,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e31ec24c194f37219d0b834f527350d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555efbbd128acb28965dc1362a9e9e494a071469264ca202038e1c023a8e83d5,PodSandboxId:9711aadb24fa2551861a07b8e7ba700abac903a5e499e2290da5f8281a2d5db6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722277898237354843,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85711970cfb58ff1e43c65ebe2b0ea9b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a911c7f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=56aa0855-c1e4-4d65-b8bf-e43752d1757d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:12 embed-certs-409322 crio[736]: time="2024-07-29 18:41:12.236269667Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9bc00b91-b6bd-4486-8d29-99e1257e068d name=/runtime.v1.RuntimeService/Version
	Jul 29 18:41:12 embed-certs-409322 crio[736]: time="2024-07-29 18:41:12.236358225Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9bc00b91-b6bd-4486-8d29-99e1257e068d name=/runtime.v1.RuntimeService/Version
	Jul 29 18:41:12 embed-certs-409322 crio[736]: time="2024-07-29 18:41:12.237771401Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b27c4a2c-d816-4ed0-b2e0-39478a464e48 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:41:12 embed-certs-409322 crio[736]: time="2024-07-29 18:41:12.238344725Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278472238320408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b27c4a2c-d816-4ed0-b2e0-39478a464e48 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:41:12 embed-certs-409322 crio[736]: time="2024-07-29 18:41:12.239010986Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fab81a5d-07ab-4d1f-ae43-be59b04ca09a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:12 embed-certs-409322 crio[736]: time="2024-07-29 18:41:12.239078592Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fab81a5d-07ab-4d1f-ae43-be59b04ca09a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:12 embed-certs-409322 crio[736]: time="2024-07-29 18:41:12.239261722Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b7c5ae6c21d8f1d4f5a76a7b2b00dee58ba86c016e42d5cd9ce48f176d17841a,PodSandboxId:82026e4cbebb5982849c683056b2d4c9434dad95444553ce97c5cbae66293adc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277919246250692,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wztpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1a01e7-9cec-4ba8-a340-8f9ccdd728d7,},Annotations:map[string]string{io.kubernetes.container.hash: 9db37acf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fed3fc2d0d695c51ec9efec220330db6860a605adc586d580af4c01807311ff,PodSandboxId:04fd5d7fe81c81684f1b7141c3faef454f9405126ae412a4854a6188fcfb611b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277919205954883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wpnfg,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 687cbc8f-370a-4b72-bc1c-6ae36efe890e,},Annotations:map[string]string{io.kubernetes.container.hash: d69d4181,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e402548bbd184a3a7181b5b5ca203f0a39d4bbd2f8d1914859e6a313e39f3e2b,PodSandboxId:ce759b42015c7e603f9ebd6843740a9d52aae7491d7e28054e8fb0fb266bbd77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1722277919153466747,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0b1e31d-9b5c-4e82-aea7-56184832c053,},Annotations:map[string]string{io.kubernetes.container.hash: 5369b1b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2ce2ac925a3585ef4fcd922a7c109fe2a4d0d04b242caab76127ae6deb93e4,PodSandboxId:f72958198ab02cacdfc9e0d6c64c0c78c7cfc66f1c62d82ad78cf21d5cfa247e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722277917569074128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxf5z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ed1812-b3bf-429d-b8f1-bdccb3415fb5,},Annotations:map[string]string{io.kubernetes.container.hash: b2e8647d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37921ddb40291c04357e83617a73e854a3aafeff689969b204c711a6d7ae42fc,PodSandboxId:7a83e374fcfcad04cf8591c3f238d286e1dce8a7dc19cd687620753f3cbba4ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722277898349454609,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092843ca16e3768154d9eaefe813d4c4,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2635bb0eb62d0c9d915e1779b6f1295ca78236982bffa8667c094b32b7ef83d1,PodSandboxId:a52ce0bf9b3060f373ba4f8f1f53f58dfb0392645383900cbcb929df59ac830c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722277898341865986,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f630b5e9caeb1a971fb8d4ec7f20523,},Annotations:map[string]string{io.kubernetes.container.hash: 4d95585c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b4971b286aa874f2e68c05351d5ab5e550733aa9de2cf7f91f6ee982e33501,PodSandboxId:bf85cb0dd5956d55683ad1694e3566c2eb05aaa501aec3eed08d8c988b9af21b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722277898334151449,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e31ec24c194f37219d0b834f527350d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555efbbd128acb28965dc1362a9e9e494a071469264ca202038e1c023a8e83d5,PodSandboxId:9711aadb24fa2551861a07b8e7ba700abac903a5e499e2290da5f8281a2d5db6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722277898237354843,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85711970cfb58ff1e43c65ebe2b0ea9b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a911c7f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fab81a5d-07ab-4d1f-ae43-be59b04ca09a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:12 embed-certs-409322 crio[736]: time="2024-07-29 18:41:12.275943965Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1d0ec473-6f89-4637-922b-f77791c32c4d name=/runtime.v1.RuntimeService/Version
	Jul 29 18:41:12 embed-certs-409322 crio[736]: time="2024-07-29 18:41:12.276164980Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1d0ec473-6f89-4637-922b-f77791c32c4d name=/runtime.v1.RuntimeService/Version
	Jul 29 18:41:12 embed-certs-409322 crio[736]: time="2024-07-29 18:41:12.278677442Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=49a30c27-56ce-4a22-a2a8-8a20842029c5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:41:12 embed-certs-409322 crio[736]: time="2024-07-29 18:41:12.279198977Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278472279176347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=49a30c27-56ce-4a22-a2a8-8a20842029c5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:41:12 embed-certs-409322 crio[736]: time="2024-07-29 18:41:12.279804763Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=85110e63-abea-473c-b660-1a7be5afc698 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:12 embed-certs-409322 crio[736]: time="2024-07-29 18:41:12.279874006Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=85110e63-abea-473c-b660-1a7be5afc698 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:12 embed-certs-409322 crio[736]: time="2024-07-29 18:41:12.280120797Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b7c5ae6c21d8f1d4f5a76a7b2b00dee58ba86c016e42d5cd9ce48f176d17841a,PodSandboxId:82026e4cbebb5982849c683056b2d4c9434dad95444553ce97c5cbae66293adc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277919246250692,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wztpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1a01e7-9cec-4ba8-a340-8f9ccdd728d7,},Annotations:map[string]string{io.kubernetes.container.hash: 9db37acf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fed3fc2d0d695c51ec9efec220330db6860a605adc586d580af4c01807311ff,PodSandboxId:04fd5d7fe81c81684f1b7141c3faef454f9405126ae412a4854a6188fcfb611b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277919205954883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wpnfg,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 687cbc8f-370a-4b72-bc1c-6ae36efe890e,},Annotations:map[string]string{io.kubernetes.container.hash: d69d4181,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e402548bbd184a3a7181b5b5ca203f0a39d4bbd2f8d1914859e6a313e39f3e2b,PodSandboxId:ce759b42015c7e603f9ebd6843740a9d52aae7491d7e28054e8fb0fb266bbd77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1722277919153466747,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0b1e31d-9b5c-4e82-aea7-56184832c053,},Annotations:map[string]string{io.kubernetes.container.hash: 5369b1b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2ce2ac925a3585ef4fcd922a7c109fe2a4d0d04b242caab76127ae6deb93e4,PodSandboxId:f72958198ab02cacdfc9e0d6c64c0c78c7cfc66f1c62d82ad78cf21d5cfa247e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722277917569074128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxf5z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ed1812-b3bf-429d-b8f1-bdccb3415fb5,},Annotations:map[string]string{io.kubernetes.container.hash: b2e8647d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37921ddb40291c04357e83617a73e854a3aafeff689969b204c711a6d7ae42fc,PodSandboxId:7a83e374fcfcad04cf8591c3f238d286e1dce8a7dc19cd687620753f3cbba4ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722277898349454609,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092843ca16e3768154d9eaefe813d4c4,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2635bb0eb62d0c9d915e1779b6f1295ca78236982bffa8667c094b32b7ef83d1,PodSandboxId:a52ce0bf9b3060f373ba4f8f1f53f58dfb0392645383900cbcb929df59ac830c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722277898341865986,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f630b5e9caeb1a971fb8d4ec7f20523,},Annotations:map[string]string{io.kubernetes.container.hash: 4d95585c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b4971b286aa874f2e68c05351d5ab5e550733aa9de2cf7f91f6ee982e33501,PodSandboxId:bf85cb0dd5956d55683ad1694e3566c2eb05aaa501aec3eed08d8c988b9af21b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722277898334151449,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e31ec24c194f37219d0b834f527350d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555efbbd128acb28965dc1362a9e9e494a071469264ca202038e1c023a8e83d5,PodSandboxId:9711aadb24fa2551861a07b8e7ba700abac903a5e499e2290da5f8281a2d5db6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722277898237354843,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85711970cfb58ff1e43c65ebe2b0ea9b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a911c7f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=85110e63-abea-473c-b660-1a7be5afc698 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:12 embed-certs-409322 crio[736]: time="2024-07-29 18:41:12.312660309Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=93dc22d1-5703-4a07-a310-a508c749bdaf name=/runtime.v1.RuntimeService/Version
	Jul 29 18:41:12 embed-certs-409322 crio[736]: time="2024-07-29 18:41:12.312746006Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=93dc22d1-5703-4a07-a310-a508c749bdaf name=/runtime.v1.RuntimeService/Version
	Jul 29 18:41:12 embed-certs-409322 crio[736]: time="2024-07-29 18:41:12.314103511Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=875e68b8-ec99-41cc-b1f4-f58c2d5f13c8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:41:12 embed-certs-409322 crio[736]: time="2024-07-29 18:41:12.314504578Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278472314481237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=875e68b8-ec99-41cc-b1f4-f58c2d5f13c8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:41:12 embed-certs-409322 crio[736]: time="2024-07-29 18:41:12.314956378Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=12869807-390c-4942-992f-963090897531 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:12 embed-certs-409322 crio[736]: time="2024-07-29 18:41:12.315067086Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=12869807-390c-4942-992f-963090897531 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:12 embed-certs-409322 crio[736]: time="2024-07-29 18:41:12.315264826Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b7c5ae6c21d8f1d4f5a76a7b2b00dee58ba86c016e42d5cd9ce48f176d17841a,PodSandboxId:82026e4cbebb5982849c683056b2d4c9434dad95444553ce97c5cbae66293adc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277919246250692,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wztpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1a01e7-9cec-4ba8-a340-8f9ccdd728d7,},Annotations:map[string]string{io.kubernetes.container.hash: 9db37acf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fed3fc2d0d695c51ec9efec220330db6860a605adc586d580af4c01807311ff,PodSandboxId:04fd5d7fe81c81684f1b7141c3faef454f9405126ae412a4854a6188fcfb611b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277919205954883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wpnfg,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 687cbc8f-370a-4b72-bc1c-6ae36efe890e,},Annotations:map[string]string{io.kubernetes.container.hash: d69d4181,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e402548bbd184a3a7181b5b5ca203f0a39d4bbd2f8d1914859e6a313e39f3e2b,PodSandboxId:ce759b42015c7e603f9ebd6843740a9d52aae7491d7e28054e8fb0fb266bbd77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1722277919153466747,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0b1e31d-9b5c-4e82-aea7-56184832c053,},Annotations:map[string]string{io.kubernetes.container.hash: 5369b1b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2ce2ac925a3585ef4fcd922a7c109fe2a4d0d04b242caab76127ae6deb93e4,PodSandboxId:f72958198ab02cacdfc9e0d6c64c0c78c7cfc66f1c62d82ad78cf21d5cfa247e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722277917569074128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxf5z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ed1812-b3bf-429d-b8f1-bdccb3415fb5,},Annotations:map[string]string{io.kubernetes.container.hash: b2e8647d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37921ddb40291c04357e83617a73e854a3aafeff689969b204c711a6d7ae42fc,PodSandboxId:7a83e374fcfcad04cf8591c3f238d286e1dce8a7dc19cd687620753f3cbba4ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722277898349454609,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092843ca16e3768154d9eaefe813d4c4,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2635bb0eb62d0c9d915e1779b6f1295ca78236982bffa8667c094b32b7ef83d1,PodSandboxId:a52ce0bf9b3060f373ba4f8f1f53f58dfb0392645383900cbcb929df59ac830c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722277898341865986,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f630b5e9caeb1a971fb8d4ec7f20523,},Annotations:map[string]string{io.kubernetes.container.hash: 4d95585c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b4971b286aa874f2e68c05351d5ab5e550733aa9de2cf7f91f6ee982e33501,PodSandboxId:bf85cb0dd5956d55683ad1694e3566c2eb05aaa501aec3eed08d8c988b9af21b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722277898334151449,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e31ec24c194f37219d0b834f527350d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555efbbd128acb28965dc1362a9e9e494a071469264ca202038e1c023a8e83d5,PodSandboxId:9711aadb24fa2551861a07b8e7ba700abac903a5e499e2290da5f8281a2d5db6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722277898237354843,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85711970cfb58ff1e43c65ebe2b0ea9b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a911c7f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=12869807-390c-4942-992f-963090897531 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b7c5ae6c21d8f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   82026e4cbebb5       coredns-7db6d8ff4d-wztpj
	2fed3fc2d0d69       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   04fd5d7fe81c8       coredns-7db6d8ff4d-wpnfg
	e402548bbd184       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   ce759b42015c7       storage-provisioner
	bc2ce2ac925a3       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   9 minutes ago       Running             kube-proxy                0                   f72958198ab02       kube-proxy-kxf5z
	37921ddb40291       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   9 minutes ago       Running             kube-scheduler            2                   7a83e374fcfca       kube-scheduler-embed-certs-409322
	2635bb0eb62d0       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   a52ce0bf9b306       etcd-embed-certs-409322
	88b4971b286aa       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   9 minutes ago       Running             kube-controller-manager   2                   bf85cb0dd5956       kube-controller-manager-embed-certs-409322
	555efbbd128ac       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   9 minutes ago       Running             kube-apiserver            2                   9711aadb24fa2       kube-apiserver-embed-certs-409322
	
	
	==> coredns [2fed3fc2d0d695c51ec9efec220330db6860a605adc586d580af4c01807311ff] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [b7c5ae6c21d8f1d4f5a76a7b2b00dee58ba86c016e42d5cd9ce48f176d17841a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-409322
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-409322
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=embed-certs-409322
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T18_31_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:31:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-409322
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:41:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:37:10 +0000   Mon, 29 Jul 2024 18:31:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:37:10 +0000   Mon, 29 Jul 2024 18:31:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:37:10 +0000   Mon, 29 Jul 2024 18:31:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:37:10 +0000   Mon, 29 Jul 2024 18:31:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.58
	  Hostname:    embed-certs-409322
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 77f2a8d4ed7d4c0f9d36bd1e29b0175a
	  System UUID:                77f2a8d4-ed7d-4c0f-9d36-bd1e29b0175a
	  Boot ID:                    ab577673-01e5-4ce5-b335-2d04fd2b473f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-wpnfg                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m15s
	  kube-system                 coredns-7db6d8ff4d-wztpj                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m15s
	  kube-system                 etcd-embed-certs-409322                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m29s
	  kube-system                 kube-apiserver-embed-certs-409322             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m29s
	  kube-system                 kube-controller-manager-embed-certs-409322    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m29s
	  kube-system                 kube-proxy-kxf5z                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-scheduler-embed-certs-409322             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m30s
	  kube-system                 metrics-server-569cc877fc-6q4nl               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m14s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m14s  kube-proxy       
	  Normal  Starting                 9m29s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m29s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m29s  kubelet          Node embed-certs-409322 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m29s  kubelet          Node embed-certs-409322 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m29s  kubelet          Node embed-certs-409322 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m16s  node-controller  Node embed-certs-409322 event: Registered Node embed-certs-409322 in Controller
	
	
	==> dmesg <==
	[  +0.049912] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039411] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.774653] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.424535] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.605559] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.855547] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.058726] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075565] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.178114] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.149484] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +0.283176] systemd-fstab-generator[719]: Ignoring "noauto" option for root device
	[  +4.339128] systemd-fstab-generator[816]: Ignoring "noauto" option for root device
	[  +0.060783] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.160800] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +5.624007] kauditd_printk_skb: 97 callbacks suppressed
	[Jul29 18:27] kauditd_printk_skb: 84 callbacks suppressed
	[Jul29 18:31] kauditd_printk_skb: 10 callbacks suppressed
	[  +1.556850] systemd-fstab-generator[3623]: Ignoring "noauto" option for root device
	[  +6.046141] systemd-fstab-generator[3947]: Ignoring "noauto" option for root device
	[  +0.086239] kauditd_printk_skb: 53 callbacks suppressed
	[ +14.081215] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.316583] systemd-fstab-generator[4242]: Ignoring "noauto" option for root device
	[Jul29 18:32] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [2635bb0eb62d0c9d915e1779b6f1295ca78236982bffa8667c094b32b7ef83d1] <==
	{"level":"info","ts":"2024-07-29T18:31:38.746427Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T18:31:38.746616Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ded7f9817c909548","initial-advertise-peer-urls":["https://192.168.39.58:2380"],"listen-peer-urls":["https://192.168.39.58:2380"],"advertise-client-urls":["https://192.168.39.58:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.58:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T18:31:38.746662Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T18:31:38.746776Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.58:2380"}
	{"level":"info","ts":"2024-07-29T18:31:38.746809Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.58:2380"}
	{"level":"info","ts":"2024-07-29T18:31:38.74929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 switched to configuration voters=(16057577330948740424)"}
	{"level":"info","ts":"2024-07-29T18:31:38.749512Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"91c640bc00cd2aea","local-member-id":"ded7f9817c909548","added-peer-id":"ded7f9817c909548","added-peer-peer-urls":["https://192.168.39.58:2380"]}
	{"level":"info","ts":"2024-07-29T18:31:39.392501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T18:31:39.392642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T18:31:39.392681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 received MsgPreVoteResp from ded7f9817c909548 at term 1"}
	{"level":"info","ts":"2024-07-29T18:31:39.392698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T18:31:39.392754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 received MsgVoteResp from ded7f9817c909548 at term 2"}
	{"level":"info","ts":"2024-07-29T18:31:39.392782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T18:31:39.392791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ded7f9817c909548 elected leader ded7f9817c909548 at term 2"}
	{"level":"info","ts":"2024-07-29T18:31:39.397374Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:31:39.400475Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"ded7f9817c909548","local-member-attributes":"{Name:embed-certs-409322 ClientURLs:[https://192.168.39.58:2379]}","request-path":"/0/members/ded7f9817c909548/attributes","cluster-id":"91c640bc00cd2aea","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T18:31:39.400789Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T18:31:39.405456Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T18:31:39.405704Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T18:31:39.405735Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T18:31:39.40915Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T18:31:39.420797Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"91c640bc00cd2aea","local-member-id":"ded7f9817c909548","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:31:39.420902Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:31:39.420924Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:31:39.440313Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.58:2379"}
	
	
	==> kernel <==
	 18:41:12 up 14 min,  0 users,  load average: 0.12, 0.13, 0.12
	Linux embed-certs-409322 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [555efbbd128acb28965dc1362a9e9e494a071469264ca202038e1c023a8e83d5] <==
	I0729 18:34:59.628655       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 18:36:41.080223       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 18:36:41.080473       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0729 18:36:42.080724       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 18:36:42.080873       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 18:36:42.080901       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 18:36:42.081094       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 18:36:42.081197       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 18:36:42.082394       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 18:37:42.082068       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 18:37:42.082305       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 18:37:42.082382       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 18:37:42.083219       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 18:37:42.083301       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 18:37:42.083394       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 18:39:42.083212       1 handler_proxy.go:93] no RequestInfo found in the context
	W0729 18:39:42.083602       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 18:39:42.083661       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 18:39:42.083691       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0729 18:39:42.083776       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 18:39:42.084956       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [88b4971b286aa874f2e68c05351d5ab5e550733aa9de2cf7f91f6ee982e33501] <==
	I0729 18:35:27.071754       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:35:56.616382       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:35:57.078721       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:36:26.621259       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:36:27.087827       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:36:56.627566       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:36:57.095162       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:37:26.634730       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:37:27.102962       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:37:56.640190       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:37:57.110737       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 18:38:07.617079       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="318.11µs"
	I0729 18:38:18.621341       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="133.285µs"
	E0729 18:38:26.646311       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:38:27.118112       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:38:56.652423       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:38:57.126363       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:39:26.657345       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:39:27.138797       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:39:56.662303       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:39:57.150486       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:40:26.669277       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:40:27.159378       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:40:56.674869       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:40:57.171087       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [bc2ce2ac925a3585ef4fcd922a7c109fe2a4d0d04b242caab76127ae6deb93e4] <==
	I0729 18:31:57.863830       1 server_linux.go:69] "Using iptables proxy"
	I0729 18:31:57.873512       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.58"]
	I0729 18:31:57.936856       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 18:31:57.936919       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 18:31:57.937015       1 server_linux.go:165] "Using iptables Proxier"
	I0729 18:31:57.941544       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 18:31:57.941742       1 server.go:872] "Version info" version="v1.30.3"
	I0729 18:31:57.941772       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:31:57.947209       1 config.go:192] "Starting service config controller"
	I0729 18:31:57.947285       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 18:31:57.947413       1 config.go:101] "Starting endpoint slice config controller"
	I0729 18:31:57.947530       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 18:31:57.948765       1 config.go:319] "Starting node config controller"
	I0729 18:31:57.950921       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 18:31:58.047953       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 18:31:58.048063       1 shared_informer.go:320] Caches are synced for service config
	I0729 18:31:58.051924       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [37921ddb40291c04357e83617a73e854a3aafeff689969b204c711a6d7ae42fc] <==
	W0729 18:31:41.968751       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 18:31:41.968802       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 18:31:41.988224       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 18:31:41.988309       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 18:31:42.032666       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 18:31:42.032845       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 18:31:42.171536       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 18:31:42.171584       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 18:31:42.257675       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 18:31:42.257922       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 18:31:42.301498       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 18:31:42.301593       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 18:31:42.366773       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 18:31:42.367234       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 18:31:42.424857       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 18:31:42.425126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 18:31:42.425617       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 18:31:42.425655       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 18:31:42.437416       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 18:31:42.437554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 18:31:42.453696       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 18:31:42.453817       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 18:31:42.567547       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 18:31:42.567685       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 18:31:44.821584       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 18:38:43 embed-certs-409322 kubelet[3954]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:38:43 embed-certs-409322 kubelet[3954]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:38:43 embed-certs-409322 kubelet[3954]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:38:47 embed-certs-409322 kubelet[3954]: E0729 18:38:47.599680    3954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q4nl" podUID="57dc61cc-7490-49e5-9d03-c81aa5d25aea"
	Jul 29 18:39:01 embed-certs-409322 kubelet[3954]: E0729 18:39:01.599396    3954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q4nl" podUID="57dc61cc-7490-49e5-9d03-c81aa5d25aea"
	Jul 29 18:39:14 embed-certs-409322 kubelet[3954]: E0729 18:39:14.599212    3954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q4nl" podUID="57dc61cc-7490-49e5-9d03-c81aa5d25aea"
	Jul 29 18:39:26 embed-certs-409322 kubelet[3954]: E0729 18:39:26.599190    3954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q4nl" podUID="57dc61cc-7490-49e5-9d03-c81aa5d25aea"
	Jul 29 18:39:38 embed-certs-409322 kubelet[3954]: E0729 18:39:38.599407    3954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q4nl" podUID="57dc61cc-7490-49e5-9d03-c81aa5d25aea"
	Jul 29 18:39:43 embed-certs-409322 kubelet[3954]: E0729 18:39:43.646619    3954 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:39:43 embed-certs-409322 kubelet[3954]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:39:43 embed-certs-409322 kubelet[3954]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:39:43 embed-certs-409322 kubelet[3954]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:39:43 embed-certs-409322 kubelet[3954]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:39:53 embed-certs-409322 kubelet[3954]: E0729 18:39:53.599160    3954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q4nl" podUID="57dc61cc-7490-49e5-9d03-c81aa5d25aea"
	Jul 29 18:40:04 embed-certs-409322 kubelet[3954]: E0729 18:40:04.599372    3954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q4nl" podUID="57dc61cc-7490-49e5-9d03-c81aa5d25aea"
	Jul 29 18:40:19 embed-certs-409322 kubelet[3954]: E0729 18:40:19.599759    3954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q4nl" podUID="57dc61cc-7490-49e5-9d03-c81aa5d25aea"
	Jul 29 18:40:30 embed-certs-409322 kubelet[3954]: E0729 18:40:30.599762    3954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q4nl" podUID="57dc61cc-7490-49e5-9d03-c81aa5d25aea"
	Jul 29 18:40:43 embed-certs-409322 kubelet[3954]: E0729 18:40:43.647345    3954 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:40:43 embed-certs-409322 kubelet[3954]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:40:43 embed-certs-409322 kubelet[3954]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:40:43 embed-certs-409322 kubelet[3954]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:40:43 embed-certs-409322 kubelet[3954]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:40:44 embed-certs-409322 kubelet[3954]: E0729 18:40:44.600887    3954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q4nl" podUID="57dc61cc-7490-49e5-9d03-c81aa5d25aea"
	Jul 29 18:40:58 embed-certs-409322 kubelet[3954]: E0729 18:40:58.599368    3954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q4nl" podUID="57dc61cc-7490-49e5-9d03-c81aa5d25aea"
	Jul 29 18:41:10 embed-certs-409322 kubelet[3954]: E0729 18:41:10.600828    3954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q4nl" podUID="57dc61cc-7490-49e5-9d03-c81aa5d25aea"
	
	
	==> storage-provisioner [e402548bbd184a3a7181b5b5ca203f0a39d4bbd2f8d1914859e6a313e39f3e2b] <==
	I0729 18:31:59.393488       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 18:31:59.453244       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 18:31:59.453436       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 18:31:59.523367       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 18:31:59.523533       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-409322_c83607fa-9136-4286-b325-60043990567d!
	I0729 18:31:59.542521       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0cd27038-3604-4007-bef6-da9bfed0b48f", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-409322_c83607fa-9136-4286-b325-60043990567d became leader
	I0729 18:31:59.623723       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-409322_c83607fa-9136-4286-b325-60043990567d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-409322 -n embed-certs-409322
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-409322 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-6q4nl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-409322 describe pod metrics-server-569cc877fc-6q4nl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-409322 describe pod metrics-server-569cc877fc-6q4nl: exit status 1 (68.329315ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-6q4nl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-409322 describe pod metrics-server-569cc877fc-6q4nl: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.02s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0729 18:33:29.677011   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
E0729 18:33:34.527119   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/auto-729010/client.crt: no such file or directory
E0729 18:34:15.528225   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kindnet-729010/client.crt: no such file or directory
E0729 18:34:50.312697   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/calico-729010/client.crt: no such file or directory
E0729 18:34:57.573206   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/auto-729010/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-888056 -n no-preload-888056
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-29 18:42:03.74673979 +0000 UTC m=+6380.414108089
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-888056 -n no-preload-888056
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-888056 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-888056 logs -n 25: (2.117420493s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-729010 sudo cat                              | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo                                  | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo                                  | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo                                  | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo find                             | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo crio                             | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-729010                                       | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	| delete  | -p                                                     | disable-driver-mounts-603863 | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | disable-driver-mounts-603863                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:19 UTC |
	|         | default-k8s-diff-port-502055                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-888056             | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC | 29 Jul 24 18:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-888056                                   | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-409322            | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC | 29 Jul 24 18:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-409322                                  | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-502055  | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC |                     |
	|         | default-k8s-diff-port-502055                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-386663        | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-888056                  | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-888056 --memory=2200                     | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:33 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-409322                 | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-409322                                  | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-502055       | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:31 UTC |
	|         | default-k8s-diff-port-502055                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-386663                              | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:22 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-386663             | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-386663                              | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
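	
	The last row above is the old-k8s-version-386663 start that has no completion time recorded. Stitched back together from the wrapped table cells, and with the binary path taken from MINIKUBE_BIN in the log below (so treat that path as an assumption if reproducing locally), the recorded invocation corresponds roughly to:
	
	  out/minikube-linux-amd64 start -p old-k8s-version-386663 --memory=2200 \
	    --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system \
	    --disable-driver-mounts --keep-context=false --driver=kvm2 \
	    --container-runtime=crio --kubernetes-version=v1.20.0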
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 18:22:47
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
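	(Read against the format above, the first entry that follows — I0729 18:22:47.218965   78080 out.go:291] — decodes as: Info severity, month 07 day 29, time 18:22:47.218965, thread id 78080, source file out.go line 291, then the message.)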
	I0729 18:22:47.218965   78080 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:22:47.219209   78080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:22:47.219217   78080 out.go:304] Setting ErrFile to fd 2...
	I0729 18:22:47.219222   78080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:22:47.219370   78080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 18:22:47.219863   78080 out.go:298] Setting JSON to false
	I0729 18:22:47.220726   78080 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7519,"bootTime":1722269848,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:22:47.220777   78080 start.go:139] virtualization: kvm guest
	I0729 18:22:47.222804   78080 out.go:177] * [old-k8s-version-386663] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:22:47.224119   78080 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 18:22:47.224173   78080 notify.go:220] Checking for updates...
	I0729 18:22:47.226449   78080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:22:47.227676   78080 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:22:47.228809   78080 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 18:22:47.229914   78080 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:22:47.230906   78080 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:22:47.232363   78080 config.go:182] Loaded profile config "old-k8s-version-386663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 18:22:47.232750   78080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:22:47.232814   78080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:22:47.247542   78080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44723
	I0729 18:22:47.247909   78080 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:22:47.248418   78080 main.go:141] libmachine: Using API Version  1
	I0729 18:22:47.248436   78080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:22:47.248786   78080 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:22:47.248965   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:22:47.250635   78080 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 18:22:47.251760   78080 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:22:47.252055   78080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:22:47.252098   78080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:22:47.266291   78080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35843
	I0729 18:22:47.266672   78080 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:22:47.267136   78080 main.go:141] libmachine: Using API Version  1
	I0729 18:22:47.267157   78080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:22:47.267492   78080 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:22:47.267662   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:22:47.303335   78080 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 18:22:47.304503   78080 start.go:297] selected driver: kvm2
	I0729 18:22:47.304513   78080 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:22:47.304607   78080 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:22:47.305291   78080 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:22:47.305360   78080 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19345-11206/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:22:47.319918   78080 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:22:47.320315   78080 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:22:47.320341   78080 cni.go:84] Creating CNI manager for ""
	I0729 18:22:47.320349   78080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:22:47.320386   78080 start.go:340] cluster config:
	{Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:22:47.320480   78080 iso.go:125] acquiring lock: {Name:mke302f851ce8256f9b44dd080ed38df68285cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:22:47.322357   78080 out.go:177] * Starting "old-k8s-version-386663" primary control-plane node in "old-k8s-version-386663" cluster
	I0729 18:22:43.378634   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:22:46.450644   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:22:47.323622   78080 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:22:47.323653   78080 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 18:22:47.323660   78080 cache.go:56] Caching tarball of preloaded images
	I0729 18:22:47.323740   78080 preload.go:172] Found /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:22:47.323761   78080 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 18:22:47.323849   78080 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/config.json ...
	I0729 18:22:47.324021   78080 start.go:360] acquireMachinesLock for old-k8s-version-386663: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:22:52.530551   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:22:55.602731   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:01.682636   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:04.754621   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:10.834616   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:13.906688   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:19.986655   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:23.059064   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:29.138659   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:32.210758   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:38.290665   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:41.362732   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:47.442637   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:50.514656   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:56.594611   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:59.666706   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:05.746649   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:08.818685   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:14.898642   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:17.970619   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:24.050664   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:27.122664   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:33.202629   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:36.274678   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:42.354674   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:45.426704   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:51.506670   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:54.578602   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:00.658683   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:03.730663   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:09.810619   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:12.882598   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:18.962612   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:22.034673   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:28.114638   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:31.186598   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:37.266642   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:40.338599   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:46.418679   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:49.490705   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:55.570690   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:58.642719   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:04.722643   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:07.794711   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:13.874638   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:16.946806   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
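	
	The run of "no route to host" dials above is thread 77394 (the no-preload-888056 profile) repeatedly failing to reach its VM at 192.168.72.80:22; it gives up at 18:26:19 below with "StartHost failed, but will try again: provision: host is not running" and schedules a retry.
	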
	I0729 18:26:19.951345   77627 start.go:364] duration metric: took 4m10.060086709s to acquireMachinesLock for "embed-certs-409322"
	I0729 18:26:19.951406   77627 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:26:19.951414   77627 fix.go:54] fixHost starting: 
	I0729 18:26:19.951732   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:26:19.951761   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:26:19.967602   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41827
	I0729 18:26:19.968062   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:26:19.968486   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:26:19.968505   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:26:19.968809   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:26:19.969009   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:19.969135   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:26:19.970757   77627 fix.go:112] recreateIfNeeded on embed-certs-409322: state=Stopped err=<nil>
	I0729 18:26:19.970784   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	W0729 18:26:19.970931   77627 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:26:19.972631   77627 out.go:177] * Restarting existing kvm2 VM for "embed-certs-409322" ...
	I0729 18:26:19.948656   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:26:19.948718   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:26:19.949066   77394 buildroot.go:166] provisioning hostname "no-preload-888056"
	I0729 18:26:19.949096   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:26:19.949286   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:26:19.951194   77394 machine.go:97] duration metric: took 4m37.435248922s to provisionDockerMachine
	I0729 18:26:19.951238   77394 fix.go:56] duration metric: took 4m37.45552986s for fixHost
	I0729 18:26:19.951246   77394 start.go:83] releasing machines lock for "no-preload-888056", held for 4m37.455571504s
	W0729 18:26:19.951284   77394 start.go:714] error starting host: provision: host is not running
	W0729 18:26:19.951381   77394 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 18:26:19.951389   77394 start.go:729] Will try again in 5 seconds ...
	I0729 18:26:19.973786   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Start
	I0729 18:26:19.973923   77627 main.go:141] libmachine: (embed-certs-409322) Ensuring networks are active...
	I0729 18:26:19.974594   77627 main.go:141] libmachine: (embed-certs-409322) Ensuring network default is active
	I0729 18:26:19.974930   77627 main.go:141] libmachine: (embed-certs-409322) Ensuring network mk-embed-certs-409322 is active
	I0729 18:26:19.975500   77627 main.go:141] libmachine: (embed-certs-409322) Getting domain xml...
	I0729 18:26:19.976135   77627 main.go:141] libmachine: (embed-certs-409322) Creating domain...
	I0729 18:26:21.186491   77627 main.go:141] libmachine: (embed-certs-409322) Waiting to get IP...
	I0729 18:26:21.187403   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:21.187857   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:21.187924   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:21.187843   78811 retry.go:31] will retry after 218.694883ms: waiting for machine to come up
	I0729 18:26:21.408404   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:21.408843   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:21.408872   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:21.408795   78811 retry.go:31] will retry after 335.138992ms: waiting for machine to come up
	I0729 18:26:21.745329   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:21.745805   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:21.745828   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:21.745759   78811 retry.go:31] will retry after 317.831297ms: waiting for machine to come up
	I0729 18:26:22.065446   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:22.065985   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:22.066024   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:22.065948   78811 retry.go:31] will retry after 557.945634ms: waiting for machine to come up
	I0729 18:26:22.625624   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:22.626020   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:22.626047   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:22.625967   78811 retry.go:31] will retry after 739.991425ms: waiting for machine to come up
	I0729 18:26:23.368166   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:23.368523   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:23.368549   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:23.368477   78811 retry.go:31] will retry after 878.16479ms: waiting for machine to come up
	I0729 18:26:24.248467   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:24.248871   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:24.248895   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:24.248813   78811 retry.go:31] will retry after 1.022542608s: waiting for machine to come up
	I0729 18:26:24.952911   77394 start.go:360] acquireMachinesLock for no-preload-888056: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:26:25.273470   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:25.273886   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:25.273913   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:25.273829   78811 retry.go:31] will retry after 1.313344307s: waiting for machine to come up
	I0729 18:26:26.589378   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:26.589805   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:26.589852   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:26.589769   78811 retry.go:31] will retry after 1.553795128s: waiting for machine to come up
	I0729 18:26:28.145271   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:28.145680   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:28.145704   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:28.145643   78811 retry.go:31] will retry after 1.859680601s: waiting for machine to come up
	I0729 18:26:30.007588   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:30.007988   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:30.008018   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:30.007937   78811 retry.go:31] will retry after 1.754805493s: waiting for machine to come up
	I0729 18:26:31.764527   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:31.765077   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:31.765107   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:31.765030   78811 retry.go:31] will retry after 2.769383357s: waiting for machine to come up
	I0729 18:26:34.536479   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:34.536972   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:34.537007   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:34.536921   78811 retry.go:31] will retry after 3.355218512s: waiting for machine to come up
	I0729 18:26:39.563371   77859 start.go:364] duration metric: took 3m59.712120998s to acquireMachinesLock for "default-k8s-diff-port-502055"
	I0729 18:26:39.563440   77859 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:26:39.563452   77859 fix.go:54] fixHost starting: 
	I0729 18:26:39.563871   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:26:39.563914   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:26:39.580545   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I0729 18:26:39.580962   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:26:39.581492   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:26:39.581518   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:26:39.581864   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:26:39.582096   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:39.582290   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:26:39.583857   77859 fix.go:112] recreateIfNeeded on default-k8s-diff-port-502055: state=Stopped err=<nil>
	I0729 18:26:39.583883   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	W0729 18:26:39.584062   77859 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:26:39.586281   77859 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-502055" ...
	I0729 18:26:39.587651   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Start
	I0729 18:26:39.587814   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Ensuring networks are active...
	I0729 18:26:39.588499   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Ensuring network default is active
	I0729 18:26:39.588864   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Ensuring network mk-default-k8s-diff-port-502055 is active
	I0729 18:26:39.589616   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Getting domain xml...
	I0729 18:26:39.590433   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Creating domain...
	I0729 18:26:37.896070   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:37.896640   77627 main.go:141] libmachine: (embed-certs-409322) Found IP for machine: 192.168.39.58
	I0729 18:26:37.896664   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has current primary IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:37.896670   77627 main.go:141] libmachine: (embed-certs-409322) Reserving static IP address...
	I0729 18:26:37.897129   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "embed-certs-409322", mac: "52:54:00:22:9f:57", ip: "192.168.39.58"} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:37.897157   77627 main.go:141] libmachine: (embed-certs-409322) Reserved static IP address: 192.168.39.58
	I0729 18:26:37.897173   77627 main.go:141] libmachine: (embed-certs-409322) DBG | skip adding static IP to network mk-embed-certs-409322 - found existing host DHCP lease matching {name: "embed-certs-409322", mac: "52:54:00:22:9f:57", ip: "192.168.39.58"}
	I0729 18:26:37.897189   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Getting to WaitForSSH function...
	I0729 18:26:37.897206   77627 main.go:141] libmachine: (embed-certs-409322) Waiting for SSH to be available...
	I0729 18:26:37.899216   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:37.899595   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:37.899616   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:37.899785   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Using SSH client type: external
	I0729 18:26:37.899808   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa (-rw-------)
	I0729 18:26:37.899845   77627 main.go:141] libmachine: (embed-certs-409322) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:26:37.899858   77627 main.go:141] libmachine: (embed-certs-409322) DBG | About to run SSH command:
	I0729 18:26:37.899872   77627 main.go:141] libmachine: (embed-certs-409322) DBG | exit 0
	I0729 18:26:38.026619   77627 main.go:141] libmachine: (embed-certs-409322) DBG | SSH cmd err, output: <nil>: 
	I0729 18:26:38.027028   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetConfigRaw
	I0729 18:26:38.027621   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetIP
	I0729 18:26:38.030532   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.030963   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.030989   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.031243   77627 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/config.json ...
	I0729 18:26:38.031413   77627 machine.go:94] provisionDockerMachine start ...
	I0729 18:26:38.031437   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:38.031642   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.033867   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.034218   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.034251   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.034380   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:38.034545   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.034682   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.034807   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:38.034992   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:38.035175   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:38.035185   77627 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:26:38.142565   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:26:38.142595   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetMachineName
	I0729 18:26:38.142842   77627 buildroot.go:166] provisioning hostname "embed-certs-409322"
	I0729 18:26:38.142872   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetMachineName
	I0729 18:26:38.143071   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.145625   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.145951   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.145974   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.146217   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:38.146423   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.146577   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.146730   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:38.146861   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:38.147046   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:38.147065   77627 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-409322 && echo "embed-certs-409322" | sudo tee /etc/hostname
	I0729 18:26:38.264341   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-409322
	
	I0729 18:26:38.264368   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.266846   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.267144   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.267171   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.267328   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:38.267488   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.267660   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.267757   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:38.267936   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:38.268106   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:38.268122   77627 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-409322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-409322/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-409322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:26:38.383748   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
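	
	The two SSH commands above are the provisioner pinning the guest hostname: first "sudo hostname embed-certs-409322 ... | sudo tee /etc/hostname", then the /etc/hosts fix-up script. A hypothetical spot-check (not part of the recorded run, assuming the profile is reachable over SSH) would be:
	
	  out/minikube-linux-amd64 -p embed-certs-409322 ssh -- "hostname; grep embed-certs-409322 /etc/hosts"
	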
	I0729 18:26:38.383779   77627 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:26:38.383805   77627 buildroot.go:174] setting up certificates
	I0729 18:26:38.383817   77627 provision.go:84] configureAuth start
	I0729 18:26:38.383827   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetMachineName
	I0729 18:26:38.384110   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetIP
	I0729 18:26:38.386936   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.387320   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.387348   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.387508   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.389550   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.389871   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.389910   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.389978   77627 provision.go:143] copyHostCerts
	I0729 18:26:38.390039   77627 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:26:38.390052   77627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:26:38.390137   77627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:26:38.390257   77627 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:26:38.390268   77627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:26:38.390308   77627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:26:38.390406   77627 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:26:38.390416   77627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:26:38.390456   77627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:26:38.390526   77627 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.embed-certs-409322 san=[127.0.0.1 192.168.39.58 embed-certs-409322 localhost minikube]
	I0729 18:26:38.903674   77627 provision.go:177] copyRemoteCerts
	I0729 18:26:38.903758   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:26:38.903791   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.906662   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.906984   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.907018   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.907171   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:38.907360   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.907543   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:38.907667   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:26:38.992373   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 18:26:39.016465   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:26:39.039598   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:26:39.062415   77627 provision.go:87] duration metric: took 678.589364ms to configureAuth
	I0729 18:26:39.062443   77627 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:26:39.062622   77627 config.go:182] Loaded profile config "embed-certs-409322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:26:39.062696   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.065308   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.065703   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.065728   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.065902   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.066076   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.066244   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.066403   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.066553   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:39.066743   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:39.066759   77627 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:26:39.326153   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:26:39.326176   77627 machine.go:97] duration metric: took 1.29475208s to provisionDockerMachine
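The provisioning step above opens an SSH session to the guest (192.168.39.58:22, key from the machine directory, user "docker"), writes the CRIO_MINIKUBE_OPTIONS drop-in to /etc/sysconfig/crio.minikube and restarts CRI-O. A minimal sketch of running such a remote command with golang.org/x/crypto/ssh follows; the key path is hypothetical and this is not minikube's actual ssh_runner implementation:

package main

import (
    "fmt"
    "log"
    "os"

    "golang.org/x/crypto/ssh"
)

func main() {
    // Hypothetical key path; the report uses the per-machine id_rsa under .minikube/machines.
    key, err := os.ReadFile("/path/to/machines/embed-certs-409322/id_rsa")
    if err != nil {
        log.Fatal(err)
    }
    signer, err := ssh.ParsePrivateKey(key)
    if err != nil {
        log.Fatal(err)
    }
    cfg := &ssh.ClientConfig{
        User:            "docker",
        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
    }
    client, err := ssh.Dial("tcp", "192.168.39.58:22", cfg)
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    sess, err := client.NewSession()
    if err != nil {
        log.Fatal(err)
    }
    defer sess.Close()

    // Same shape as the command in the log: write the sysconfig drop-in, then restart crio.
    cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
    out, err := sess.CombinedOutput(cmd)
    fmt.Printf("output: %s\n", out)
    if err != nil {
        log.Fatal(err)
    }
}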
	I0729 18:26:39.326186   77627 start.go:293] postStartSetup for "embed-certs-409322" (driver="kvm2")
	I0729 18:26:39.326195   77627 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:26:39.326209   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.326603   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:26:39.326637   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.329049   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.329448   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.329476   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.329616   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.329822   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.330022   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.330186   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:26:39.413084   77627 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:26:39.417438   77627 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:26:39.417462   77627 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:26:39.417535   77627 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:26:39.417626   77627 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:26:39.417749   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:26:39.427256   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:26:39.451330   77627 start.go:296] duration metric: took 125.132889ms for postStartSetup
	I0729 18:26:39.451362   77627 fix.go:56] duration metric: took 19.499949606s for fixHost
	I0729 18:26:39.451380   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.453750   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.454047   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.454072   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.454237   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.454416   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.454570   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.454698   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.454864   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:39.455069   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:39.455080   77627 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:26:39.563211   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277599.531173461
	
	I0729 18:26:39.563238   77627 fix.go:216] guest clock: 1722277599.531173461
	I0729 18:26:39.563248   77627 fix.go:229] Guest: 2024-07-29 18:26:39.531173461 +0000 UTC Remote: 2024-07-29 18:26:39.451365859 +0000 UTC m=+269.697720486 (delta=79.807602ms)
	I0729 18:26:39.563278   77627 fix.go:200] guest clock delta is within tolerance: 79.807602ms
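fix.go above reads the guest clock over SSH (date +%s.%N), compares it with the host clock and only acts when the delta exceeds a tolerance (here 79.8ms was accepted). A small, self-contained sketch of that comparison with the remote read stubbed out; the tolerance value is a hypothetical stand-in, not minikube's exact threshold:

package main

import (
    "fmt"
    "strconv"
    "strings"
    "time"
)

// parseEpoch turns output like "1722277599.531173461" (date +%s.%N) into a time.Time.
func parseEpoch(s string) (time.Time, error) {
    parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    sec, err := strconv.ParseInt(parts[0], 10, 64)
    if err != nil {
        return time.Time{}, err
    }
    var nsec int64
    if len(parts) == 2 {
        // Right-pad to nine digits so ".5" means 500ms, not 5ns.
        frac := (parts[1] + "000000000")[:9]
        if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
            return time.Time{}, err
        }
    }
    return time.Unix(sec, nsec), nil
}

func main() {
    guestOut := "1722277599.531173461" // in the real flow this string comes from the VM over SSH
    guest, err := parseEpoch(guestOut)
    if err != nil {
        panic(err)
    }
    delta := time.Since(guest)
    if delta < 0 {
        delta = -delta
    }
    const tolerance = 2 * time.Second // hypothetical threshold
    if delta <= tolerance {
        fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    } else {
        fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
    }
}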
	I0729 18:26:39.563287   77627 start.go:83] releasing machines lock for "embed-certs-409322", held for 19.611902888s
	I0729 18:26:39.563318   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.563562   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetIP
	I0729 18:26:39.566225   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.566549   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.566575   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.566766   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.567227   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.567378   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.567460   77627 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:26:39.567501   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.567565   77627 ssh_runner.go:195] Run: cat /version.json
	I0729 18:26:39.567593   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.570113   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.570330   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.570536   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.570558   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.570747   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.570754   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.570776   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.570883   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.571004   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.571113   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.571211   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.571330   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.571438   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:26:39.571478   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:26:39.651235   77627 ssh_runner.go:195] Run: systemctl --version
	I0729 18:26:39.677383   77627 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:26:39.824036   77627 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:26:39.830027   77627 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:26:39.830103   77627 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:26:39.845939   77627 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:26:39.845963   77627 start.go:495] detecting cgroup driver to use...
	I0729 18:26:39.846019   77627 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:26:39.862867   77627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:26:39.878060   77627 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:26:39.878152   77627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:26:39.892471   77627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:26:39.906690   77627 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:26:40.039725   77627 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:26:40.201419   77627 docker.go:233] disabling docker service ...
	I0729 18:26:40.201489   77627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:26:40.222454   77627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:26:40.237523   77627 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:26:40.371463   77627 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:26:40.499676   77627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
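Several of the steps above probe units with "systemctl is-active --quiet" and rely only on the exit status. In Go that maps to inspecting the error returned by exec; a minimal sketch with hypothetical unit names:

package main

import (
    "fmt"
    "os/exec"
)

// isActive reports whether a systemd unit is active, using the exit code of
// `systemctl is-active --quiet`, the same check the log shows for containerd and docker.
func isActive(unit string) bool {
    // Run returns a non-nil error (an *exec.ExitError) when the unit is not active.
    return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
    for _, unit := range []string{"docker", "crio"} { // hypothetical units to probe
        fmt.Printf("%s active: %v\n", unit, isActive(unit))
    }
}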
	I0729 18:26:40.514068   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:26:40.534051   77627 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:26:40.534114   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.545364   77627 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:26:40.545458   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.557113   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.568215   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.579433   77627 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:26:40.591005   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.601933   77627 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.621097   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
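The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl. The same idempotent line rewrites can be expressed with a multiline regexp; the sketch below works on an in-memory copy and simplifies the conmon_cgroup delete-and-reinsert into a direct replacement, so it is illustrative only:

package main

import (
    "fmt"
    "regexp"
)

func main() {
    conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.6"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
    // Mirror the edits from the log: force the pause image and switch CRI-O to the
    // cgroupfs cgroup manager with conmon running in the "pod" cgroup.
    reps := []struct{ pattern, repl string }{
        {`(?m)^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.9"`},
        {`(?m)^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`},
        {`(?m)^.*conmon_cgroup = .*$`, `conmon_cgroup = "pod"`},
    }
    for _, r := range reps {
        conf = regexp.MustCompile(r.pattern).ReplaceAllString(conf, r.repl)
    }
    fmt.Print(conf)
}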
	I0729 18:26:40.631960   77627 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:26:40.642308   77627 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:26:40.642383   77627 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:26:40.656469   77627 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:26:40.671251   77627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:26:40.784289   77627 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:26:40.933837   77627 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:26:40.933910   77627 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:26:40.939031   77627 start.go:563] Will wait 60s for crictl version
	I0729 18:26:40.939086   77627 ssh_runner.go:195] Run: which crictl
	I0729 18:26:40.943166   77627 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:26:40.985673   77627 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
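After restarting CRI-O the tooling waits up to 60s for /var/run/crio/crio.sock to appear before calling crictl. A minimal polling sketch of that wait; the poll interval is a hypothetical choice:

package main

import (
    "fmt"
    "os"
    "time"
)

// waitForSocket polls until path exists or the timeout elapses, mirroring the
// "Will wait 60s for socket path" step in the log.
func waitForSocket(path string, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for {
        if _, err := os.Stat(path); err == nil {
            return nil
        }
        if time.Now().After(deadline) {
            return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
        }
        time.Sleep(500 * time.Millisecond) // hypothetical poll interval
    }
}

func main() {
    if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println("socket is ready")
}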
	I0729 18:26:40.985753   77627 ssh_runner.go:195] Run: crio --version
	I0729 18:26:41.013973   77627 ssh_runner.go:195] Run: crio --version
	I0729 18:26:41.046080   77627 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:26:40.822462   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting to get IP...
	I0729 18:26:40.823526   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:40.823948   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:40.824000   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:40.823920   78947 retry.go:31] will retry after 262.026124ms: waiting for machine to come up
	I0729 18:26:41.087492   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.087961   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.087991   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:41.087913   78947 retry.go:31] will retry after 380.066984ms: waiting for machine to come up
	I0729 18:26:41.469728   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.470215   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.470244   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:41.470181   78947 retry.go:31] will retry after 293.069239ms: waiting for machine to come up
	I0729 18:26:41.764797   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.765277   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.765303   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:41.765228   78947 retry.go:31] will retry after 491.247116ms: waiting for machine to come up
	I0729 18:26:42.257741   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:42.258247   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:42.258275   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:42.258220   78947 retry.go:31] will retry after 693.832082ms: waiting for machine to come up
	I0729 18:26:42.953375   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:42.954146   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:42.954169   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:42.954051   78947 retry.go:31] will retry after 710.005115ms: waiting for machine to come up
	I0729 18:26:43.666068   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:43.666478   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:43.666504   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:43.666438   78947 retry.go:31] will retry after 1.077324053s: waiting for machine to come up
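The default-k8s-diff-port-502055 block above shows retry.go waiting for the machine to get an IP, retrying with growing, slightly jittered delays. A generic sketch of that retry-with-backoff pattern; the attempt count, base delay and jitter are hypothetical parameters, not the exact retry.go implementation:

package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

// retry calls fn until it succeeds or attempts are exhausted, sleeping an
// exponentially growing, jittered delay between tries, similar to the
// "will retry after ...ms" lines in the log.
func retry(attempts int, base time.Duration, fn func() error) error {
    var err error
    for i := 0; i < attempts; i++ {
        if err = fn(); err == nil {
            return nil
        }
        delay := base * time.Duration(1<<i)
        delay += time.Duration(rand.Int63n(int64(delay) / 2)) // jitter
        fmt.Printf("will retry after %v: %v\n", delay, err)
        time.Sleep(delay)
    }
    return err
}

func main() {
    tries := 0
    err := retry(8, 250*time.Millisecond, func() error {
        tries++
        if tries < 4 { // stand-in for "machine has no IP yet"
            return errors.New("waiting for machine to come up")
        }
        return nil
    })
    fmt.Println("result:", err)
}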
	I0729 18:26:41.047322   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetIP
	I0729 18:26:41.049993   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:41.050394   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:41.050433   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:41.050630   77627 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 18:26:41.054805   77627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:26:41.066926   77627 kubeadm.go:883] updating cluster {Name:embed-certs-409322 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-409322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:26:41.067053   77627 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:26:41.067115   77627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:26:41.103417   77627 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
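The preload check above runs "sudo crictl images --output json" and looks for the expected kube-apiserver tag before deciding to copy the preload tarball. A sketch of parsing that JSON; the payload shape and field names ("images", "repoTags") are assumptions about crictl's output, not taken from this report:

package main

import (
    "encoding/json"
    "fmt"
)

// imageList models the part of `crictl images --output json` this check needs;
// the field names are assumed and should be verified against your crictl version.
type imageList struct {
    Images []struct {
        RepoTags []string `json:"repoTags"`
    } `json:"images"`
}

func hasImage(raw []byte, want string) (bool, error) {
    var list imageList
    if err := json.Unmarshal(raw, &list); err != nil {
        return false, err
    }
    for _, img := range list.Images {
        for _, tag := range img.RepoTags {
            if tag == want {
                return true, nil
            }
        }
    }
    return false, nil
}

func main() {
    raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.9"]}]}`) // sample payload
    ok, err := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.30.3")
    if err != nil {
        panic(err)
    }
    fmt.Println("preloaded:", ok) // false here, matching the "assuming images are not preloaded" path
}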
	I0729 18:26:41.103489   77627 ssh_runner.go:195] Run: which lz4
	I0729 18:26:41.107793   77627 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 18:26:41.112161   77627 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:26:41.112192   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 18:26:42.559564   77627 crio.go:462] duration metric: took 1.451801292s to copy over tarball
	I0729 18:26:42.559679   77627 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:26:44.759513   77627 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.199801336s)
	I0729 18:26:44.759543   77627 crio.go:469] duration metric: took 2.199942615s to extract the tarball
	I0729 18:26:44.759554   77627 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 18:26:44.744984   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:44.745450   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:44.745477   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:44.745403   78947 retry.go:31] will retry after 1.064257005s: waiting for machine to come up
	I0729 18:26:45.811414   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:45.811840   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:45.811880   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:45.811799   78947 retry.go:31] will retry after 1.30236943s: waiting for machine to come up
	I0729 18:26:47.116252   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:47.116668   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:47.116728   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:47.116647   78947 retry.go:31] will retry after 1.424333691s: waiting for machine to come up
	I0729 18:26:48.543481   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:48.543945   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:48.543973   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:48.543894   78947 retry.go:31] will retry after 2.106061522s: waiting for machine to come up
	I0729 18:26:44.798609   77627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:26:44.848236   77627 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:26:44.848257   77627 cache_images.go:84] Images are preloaded, skipping loading
	I0729 18:26:44.848265   77627 kubeadm.go:934] updating node { 192.168.39.58 8443 v1.30.3 crio true true} ...
	I0729 18:26:44.848355   77627 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-409322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-409322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
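The kubelet unit override above is rendered from the node's version, hostname override and IP. A sketch of producing that drop-in with text/template; the template text mirrors the unit shown in the log but is illustrative, not the exact file minikube writes:

package main

import (
    "os"
    "text/template"
)

// kubeletDropIn follows the shape of the [Unit]/[Service]/[Install] override above.
const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
    tmpl := template.Must(template.New("dropin").Parse(kubeletDropIn))
    data := struct{ Version, Node, IP string }{"v1.30.3", "embed-certs-409322", "192.168.39.58"}
    if err := tmpl.Execute(os.Stdout, data); err != nil {
        panic(err)
    }
}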
	I0729 18:26:44.848415   77627 ssh_runner.go:195] Run: crio config
	I0729 18:26:44.901558   77627 cni.go:84] Creating CNI manager for ""
	I0729 18:26:44.901584   77627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:26:44.901597   77627 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:26:44.901625   77627 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-409322 NodeName:embed-certs-409322 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:26:44.901807   77627 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-409322"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:26:44.901875   77627 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:26:44.912290   77627 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:26:44.912351   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:26:44.921801   77627 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0729 18:26:44.940473   77627 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:26:44.958445   77627 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0729 18:26:44.976890   77627 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I0729 18:26:44.980974   77627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
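The one-liner above rewrites /etc/hosts so that exactly one control-plane.minikube.internal entry points at 192.168.39.58: drop any old line for the host, then append the new one. The same idea in Go, operating on a string for illustration rather than the real file:

package main

import (
    "fmt"
    "strings"
)

// ensureHost removes existing lines ending in "\t<host>" and appends "ip\thost",
// the same effect as the grep -v / echo pipeline in the log.
func ensureHost(hostsFile, ip, host string) string {
    var out []string
    for _, line := range strings.Split(hostsFile, "\n") {
        if strings.HasSuffix(line, "\t"+host) {
            continue // drop the stale entry for this host
        }
        if line != "" {
            out = append(out, line)
        }
    }
    out = append(out, ip+"\t"+host)
    return strings.Join(out, "\n") + "\n"
}

func main() {
    hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
    fmt.Print(ensureHost(hosts, "192.168.39.58", "control-plane.minikube.internal"))
}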
	I0729 18:26:44.994793   77627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:26:45.120453   77627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:26:45.138398   77627 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322 for IP: 192.168.39.58
	I0729 18:26:45.138419   77627 certs.go:194] generating shared ca certs ...
	I0729 18:26:45.138438   77627 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:26:45.138592   77627 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:26:45.138643   77627 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:26:45.138657   77627 certs.go:256] generating profile certs ...
	I0729 18:26:45.138751   77627 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/client.key
	I0729 18:26:45.138823   77627 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/apiserver.key.4af4a6b9
	I0729 18:26:45.138889   77627 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/proxy-client.key
	I0729 18:26:45.139034   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:26:45.139074   77627 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:26:45.139088   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:26:45.139122   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:26:45.139161   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:26:45.139200   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:26:45.139305   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:26:45.139979   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:26:45.177194   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:26:45.206349   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:26:45.242291   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:26:45.277062   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 18:26:45.312447   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 18:26:45.345482   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:26:45.369151   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:26:45.394521   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:26:45.418579   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:26:45.443252   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:26:45.466770   77627 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:26:45.484159   77627 ssh_runner.go:195] Run: openssl version
	I0729 18:26:45.490045   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:26:45.501166   77627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:26:45.505930   77627 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:26:45.505988   77627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:26:45.511926   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:26:45.522860   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:26:45.533560   77627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:26:45.538411   77627 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:26:45.538474   77627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:26:45.544485   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:26:45.555603   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:26:45.566407   77627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:26:45.570892   77627 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:26:45.570944   77627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:26:45.576555   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 18:26:45.587780   77627 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:26:45.592689   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:26:45.598981   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:26:45.604952   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:26:45.611225   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:26:45.617506   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:26:45.623744   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
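Each "openssl x509 -noout -checkend 86400" call above asks whether a certificate (apiserver, etcd, front-proxy clients and peers) expires within the next 24 hours. The equivalent check in Go with crypto/x509; the file path in main is a hypothetical example:

package main

import (
    "crypto/x509"
    "encoding/pem"
    "errors"
    "fmt"
    "os"
    "time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// before now+window, the same question `openssl x509 -checkend 86400` answers.
func expiresWithin(path string, window time.Duration) (bool, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return false, err
    }
    block, _ := pem.Decode(data)
    if block == nil {
        return false, errors.New("no PEM block found")
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        return false, err
    }
    return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
    soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    if err != nil {
        fmt.Println("check failed:", err)
        return
    }
    fmt.Println("expires within 24h:", soon)
}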
	I0729 18:26:45.629836   77627 kubeadm.go:392] StartCluster: {Name:embed-certs-409322 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-409322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:26:45.629947   77627 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:26:45.630003   77627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:26:45.667768   77627 cri.go:89] found id: ""
	I0729 18:26:45.667853   77627 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:26:45.678703   77627 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:26:45.678724   77627 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:26:45.678772   77627 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:26:45.691979   77627 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:26:45.693237   77627 kubeconfig.go:125] found "embed-certs-409322" server: "https://192.168.39.58:8443"
	I0729 18:26:45.696093   77627 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:26:45.708981   77627 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.58
	I0729 18:26:45.709017   77627 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:26:45.709030   77627 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:26:45.709088   77627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:26:45.748738   77627 cri.go:89] found id: ""
	I0729 18:26:45.748817   77627 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:26:45.775148   77627 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:26:45.786631   77627 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:26:45.786651   77627 kubeadm.go:157] found existing configuration files:
	
	I0729 18:26:45.786701   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:26:45.799453   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:26:45.799507   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:26:45.809691   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:26:45.819592   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:26:45.819638   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:26:45.832072   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:26:45.843769   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:26:45.843817   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:26:45.854649   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:26:45.863448   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:26:45.863504   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:26:45.872399   77627 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:26:45.881992   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:46.012679   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:47.143076   77627 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.130359187s)
	I0729 18:26:47.143112   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:47.370854   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:47.446808   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
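The restart path above re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml, each with the versioned binaries directory prepended to PATH. A sketch of driving that sequence from Go with os/exec; the ordering copies the log and everything else is illustrative:

package main

import (
    "fmt"
    "os"
    "os/exec"
)

func main() {
    phases := [][]string{
        {"init", "phase", "certs", "all"},
        {"init", "phase", "kubeconfig", "all"},
        {"init", "phase", "kubelet-start"},
        {"init", "phase", "control-plane", "all"},
        {"init", "phase", "etcd", "local"},
    }
    for _, p := range phases {
        args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
        cmd := exec.Command("kubeadm", args...)
        // Prepend the versioned binaries dir, as the log's `sudo env PATH=...` wrapper does.
        cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.30.3:"+os.Getenv("PATH"))
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Println("phase failed:", p, err)
            return
        }
    }
    fmt.Println("all kubeadm phases completed")
}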
	I0729 18:26:47.550087   77627 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:26:47.550191   77627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:26:48.050502   77627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:26:48.550499   77627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:26:48.608713   77627 api_server.go:72] duration metric: took 1.058625786s to wait for apiserver process to appear ...
	I0729 18:26:48.608745   77627 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:26:48.608773   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:51.829925   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:26:51.829963   77627 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:26:51.829979   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:51.843474   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:26:51.843503   77627 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:26:52.109882   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:52.117387   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:26:52.117415   77627 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:26:52.608863   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:52.613809   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:26:52.613840   77627 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:26:53.109430   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:53.115353   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I0729 18:26:53.122373   77627 api_server.go:141] control plane version: v1.30.3
	I0729 18:26:53.122411   77627 api_server.go:131] duration metric: took 4.513658045s to wait for apiserver health ...
	I0729 18:26:53.122420   77627 cni.go:84] Creating CNI manager for ""
	I0729 18:26:53.122426   77627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:26:53.123807   77627 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:26:50.651329   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:50.651724   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:50.651753   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:50.651678   78947 retry.go:31] will retry after 3.358167933s: waiting for machine to come up
	I0729 18:26:54.014102   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:54.014543   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:54.014576   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:54.014495   78947 retry.go:31] will retry after 4.372189125s: waiting for machine to come up
	I0729 18:26:53.124953   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:26:53.140970   77627 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 18:26:53.179660   77627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:26:53.193885   77627 system_pods.go:59] 8 kube-system pods found
	I0729 18:26:53.193921   77627 system_pods.go:61] "coredns-7db6d8ff4d-vxvfc" [da2fd5a1-f57f-4374-99ee-9017e228176f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 18:26:53.193932   77627 system_pods.go:61] "etcd-embed-certs-409322" [3eca462f-6156-4858-a886-30d0d32faa35] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 18:26:53.193944   77627 system_pods.go:61] "kube-apiserver-embed-certs-409322" [4c6473c7-d7b8-4513-b800-7cab08748d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 18:26:53.193953   77627 system_pods.go:61] "kube-controller-manager-embed-certs-409322" [2dc47da0-3d24-49d8-91ae-13074468b423] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 18:26:53.193961   77627 system_pods.go:61] "kube-proxy-zf5jf" [a0b6fd82-d0b1-4821-a668-4cb6420b4860] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 18:26:53.193969   77627 system_pods.go:61] "kube-scheduler-embed-certs-409322" [ab422567-58e6-4f22-a7cf-391b35cc386c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 18:26:53.193977   77627 system_pods.go:61] "metrics-server-569cc877fc-flh27" [83d6c69c-200d-4ce2-80e9-b83ff5b6ebe9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:26:53.193989   77627 system_pods.go:61] "storage-provisioner" [73ff548f-26c3-4442-a9bd-bdac45261476] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 18:26:53.194002   77627 system_pods.go:74] duration metric: took 14.320361ms to wait for pod list to return data ...
	I0729 18:26:53.194014   77627 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:26:53.197826   77627 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:26:53.197858   77627 node_conditions.go:123] node cpu capacity is 2
	I0729 18:26:53.197870   77627 node_conditions.go:105] duration metric: took 3.850077ms to run NodePressure ...
	I0729 18:26:53.197884   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:53.467868   77627 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 18:26:53.471886   77627 kubeadm.go:739] kubelet initialised
	I0729 18:26:53.471905   77627 kubeadm.go:740] duration metric: took 4.016417ms waiting for restarted kubelet to initialise ...
	I0729 18:26:53.471912   77627 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:26:53.476695   77627 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-vxvfc" in "kube-system" namespace to be "Ready" ...
	I0729 18:26:53.480449   77627 pod_ready.go:97] node "embed-certs-409322" hosting pod "coredns-7db6d8ff4d-vxvfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.480481   77627 pod_ready.go:81] duration metric: took 3.766ms for pod "coredns-7db6d8ff4d-vxvfc" in "kube-system" namespace to be "Ready" ...
	E0729 18:26:53.480491   77627 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-409322" hosting pod "coredns-7db6d8ff4d-vxvfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.480501   77627 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:26:53.484712   77627 pod_ready.go:97] node "embed-certs-409322" hosting pod "etcd-embed-certs-409322" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.484739   77627 pod_ready.go:81] duration metric: took 4.228077ms for pod "etcd-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	E0729 18:26:53.484750   77627 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-409322" hosting pod "etcd-embed-certs-409322" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.484759   77627 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:26:53.488510   77627 pod_ready.go:97] node "embed-certs-409322" hosting pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.488532   77627 pod_ready.go:81] duration metric: took 3.76371ms for pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	E0729 18:26:53.488539   77627 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-409322" hosting pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.488545   77627 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:26:58.387940   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.388358   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Found IP for machine: 192.168.61.244
	I0729 18:26:58.388383   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has current primary IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.388396   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Reserving static IP address...
	I0729 18:26:58.388794   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-502055", mac: "52:54:00:ae:63:e1", ip: "192.168.61.244"} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.388826   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Reserved static IP address: 192.168.61.244
	I0729 18:26:58.388848   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | skip adding static IP to network mk-default-k8s-diff-port-502055 - found existing host DHCP lease matching {name: "default-k8s-diff-port-502055", mac: "52:54:00:ae:63:e1", ip: "192.168.61.244"}
	I0729 18:26:58.388873   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for SSH to be available...
	I0729 18:26:58.388894   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Getting to WaitForSSH function...
	I0729 18:26:58.390937   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.391281   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.391319   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.391381   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Using SSH client type: external
	I0729 18:26:58.391408   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa (-rw-------)
	I0729 18:26:58.391457   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.244 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:26:58.391490   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | About to run SSH command:
	I0729 18:26:58.391511   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | exit 0
	I0729 18:26:58.518399   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | SSH cmd err, output: <nil>: 
	I0729 18:26:58.518782   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetConfigRaw
	I0729 18:26:58.519492   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetIP
	I0729 18:26:58.522245   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.522580   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.522615   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.522862   77859 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/config.json ...
	I0729 18:26:58.523037   77859 machine.go:94] provisionDockerMachine start ...
	I0729 18:26:58.523053   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:58.523258   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:58.525654   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.525998   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.526018   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.526185   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:58.526351   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.526555   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.526705   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:58.526874   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:58.527066   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:58.527079   77859 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:26:58.635267   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:26:58.635302   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetMachineName
	I0729 18:26:58.635524   77859 buildroot.go:166] provisioning hostname "default-k8s-diff-port-502055"
	I0729 18:26:58.635550   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetMachineName
	I0729 18:26:58.635789   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:58.638770   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.639235   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.639265   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.639371   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:58.639564   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.639729   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.639865   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:58.640048   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:58.640227   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:58.640245   77859 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-502055 && echo "default-k8s-diff-port-502055" | sudo tee /etc/hostname
	I0729 18:26:58.760577   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-502055
	
	I0729 18:26:58.760603   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:58.763294   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.763591   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.763625   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.763766   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:58.763970   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.764159   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.764311   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:58.764480   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:58.764641   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:58.764659   77859 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-502055' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-502055/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-502055' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:26:58.879366   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:26:58.879400   77859 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:26:58.879440   77859 buildroot.go:174] setting up certificates
	I0729 18:26:58.879451   77859 provision.go:84] configureAuth start
	I0729 18:26:58.879463   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetMachineName
	I0729 18:26:58.879735   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetIP
	I0729 18:26:58.882335   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.882652   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.882680   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.882848   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:58.885023   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.885313   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.885339   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.885433   77859 provision.go:143] copyHostCerts
	I0729 18:26:58.885479   77859 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:26:58.885488   77859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:26:58.885544   77859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:26:58.885633   77859 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:26:58.885641   77859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:26:58.885660   77859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:26:58.885709   77859 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:26:58.885716   77859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:26:58.885733   77859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:26:58.885783   77859 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-502055 san=[127.0.0.1 192.168.61.244 default-k8s-diff-port-502055 localhost minikube]
	I0729 18:26:59.130657   77859 provision.go:177] copyRemoteCerts
	I0729 18:26:59.130724   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:26:59.130749   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.133536   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.133898   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.133922   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.134079   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.134260   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.134421   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.134530   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:26:59.216614   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 18:26:59.240540   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:26:59.267350   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:26:59.294003   77859 provision.go:87] duration metric: took 414.539559ms to configureAuth
	I0729 18:26:59.294032   77859 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:26:59.294222   77859 config.go:182] Loaded profile config "default-k8s-diff-port-502055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:26:59.294293   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.296911   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.297285   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.297311   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.297450   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.297656   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.297804   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.297935   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.298102   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:59.298265   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:59.298281   77859 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:26:59.557084   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:26:59.557131   77859 machine.go:97] duration metric: took 1.034080964s to provisionDockerMachine
	I0729 18:26:59.557148   77859 start.go:293] postStartSetup for "default-k8s-diff-port-502055" (driver="kvm2")
	I0729 18:26:59.557165   77859 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:26:59.557191   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.557496   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:26:59.557529   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.559962   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.560255   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.560276   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.560461   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.560635   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.560798   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.560953   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:26:59.645623   77859 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:26:59.650416   77859 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:26:59.650447   77859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:26:59.650531   77859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:26:59.650624   77859 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:26:59.650730   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:26:59.660864   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:26:59.685728   77859 start.go:296] duration metric: took 128.564534ms for postStartSetup
	I0729 18:26:59.685767   77859 fix.go:56] duration metric: took 20.122314731s for fixHost
	I0729 18:26:59.685791   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.688401   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.688773   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.688801   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.688978   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.689157   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.689293   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.689401   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.689551   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:59.689712   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:59.689722   77859 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:26:55.494570   77627 pod_ready.go:102] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"False"
	I0729 18:26:57.495784   77627 pod_ready.go:102] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"False"
	I0729 18:26:59.799712   78080 start.go:364] duration metric: took 4m12.475660562s to acquireMachinesLock for "old-k8s-version-386663"
	I0729 18:26:59.799786   78080 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:26:59.799796   78080 fix.go:54] fixHost starting: 
	I0729 18:26:59.800184   78080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:26:59.800215   78080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:26:59.816885   78080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37963
	I0729 18:26:59.817336   78080 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:26:59.817822   78080 main.go:141] libmachine: Using API Version  1
	I0729 18:26:59.817851   78080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:26:59.818283   78080 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:26:59.818505   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:26:59.818671   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetState
	I0729 18:26:59.820232   78080 fix.go:112] recreateIfNeeded on old-k8s-version-386663: state=Stopped err=<nil>
	I0729 18:26:59.820254   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	W0729 18:26:59.820426   78080 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:26:59.822140   78080 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-386663" ...
	I0729 18:26:59.799573   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277619.755982716
	
	I0729 18:26:59.799603   77859 fix.go:216] guest clock: 1722277619.755982716
	I0729 18:26:59.799614   77859 fix.go:229] Guest: 2024-07-29 18:26:59.755982716 +0000 UTC Remote: 2024-07-29 18:26:59.685771603 +0000 UTC m=+259.980298680 (delta=70.211113ms)
	I0729 18:26:59.799637   77859 fix.go:200] guest clock delta is within tolerance: 70.211113ms
	I0729 18:26:59.799641   77859 start.go:83] releasing machines lock for "default-k8s-diff-port-502055", held for 20.236230068s
	I0729 18:26:59.799672   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.799944   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetIP
	I0729 18:26:59.802636   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.802983   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.803013   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.803248   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.803740   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.803927   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.804023   77859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:26:59.804070   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.804193   77859 ssh_runner.go:195] Run: cat /version.json
	I0729 18:26:59.804229   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.807037   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.807117   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.807395   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.807435   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.807528   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.807547   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.807565   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.807708   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.807717   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.807910   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.807936   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.808043   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:26:59.808098   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.808244   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:26:59.920371   77859 ssh_runner.go:195] Run: systemctl --version
	I0729 18:26:59.926620   77859 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:27:00.072161   77859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:27:00.079273   77859 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:27:00.079340   77859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:27:00.096528   77859 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:27:00.096550   77859 start.go:495] detecting cgroup driver to use...
	I0729 18:27:00.096610   77859 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:27:00.113690   77859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:27:00.129058   77859 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:27:00.129126   77859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:27:00.143930   77859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:27:00.158085   77859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:27:00.296398   77859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:27:00.482313   77859 docker.go:233] disabling docker service ...
	I0729 18:27:00.482459   77859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:27:00.501504   77859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:27:00.520932   77859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:27:00.657805   77859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:27:00.792064   77859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:27:00.807790   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:27:00.827373   77859 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:27:00.827423   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.838281   77859 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:27:00.838340   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.849533   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.860820   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.872359   77859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:27:00.883904   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.895589   77859 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.914639   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.926278   77859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:27:00.936329   77859 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:27:00.936383   77859 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:27:00.951219   77859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:27:00.966530   77859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:01.086665   77859 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:27:01.233627   77859 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:27:01.233703   77859 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:27:01.241055   77859 start.go:563] Will wait 60s for crictl version
	I0729 18:27:01.241122   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:27:01.244875   77859 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:27:01.284013   77859 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:27:01.284103   77859 ssh_runner.go:195] Run: crio --version
	I0729 18:27:01.315493   77859 ssh_runner.go:195] Run: crio --version
	I0729 18:27:01.348781   77859 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:26:59.823421   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .Start
	I0729 18:26:59.823575   78080 main.go:141] libmachine: (old-k8s-version-386663) Ensuring networks are active...
	I0729 18:26:59.824264   78080 main.go:141] libmachine: (old-k8s-version-386663) Ensuring network default is active
	I0729 18:26:59.824641   78080 main.go:141] libmachine: (old-k8s-version-386663) Ensuring network mk-old-k8s-version-386663 is active
	I0729 18:26:59.825024   78080 main.go:141] libmachine: (old-k8s-version-386663) Getting domain xml...
	I0729 18:26:59.825885   78080 main.go:141] libmachine: (old-k8s-version-386663) Creating domain...
	I0729 18:27:01.104265   78080 main.go:141] libmachine: (old-k8s-version-386663) Waiting to get IP...
	I0729 18:27:01.105349   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.105790   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.105836   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.105761   79098 retry.go:31] will retry after 308.255094ms: waiting for machine to come up
	I0729 18:27:01.415431   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.415999   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.416030   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.415952   79098 retry.go:31] will retry after 236.525723ms: waiting for machine to come up
	I0729 18:27:01.654767   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.655279   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.655312   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.655247   79098 retry.go:31] will retry after 311.010394ms: waiting for machine to come up
	I0729 18:27:01.967850   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.968374   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.968404   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.968333   79098 retry.go:31] will retry after 468.477549ms: waiting for machine to come up
	I0729 18:27:01.350059   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetIP
	I0729 18:27:01.352945   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:01.353398   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:27:01.353429   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:01.353630   77859 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 18:27:01.357955   77859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:01.371879   77859 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-502055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-502055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.244 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:27:01.372034   77859 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:27:01.372100   77859 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:01.412356   77859 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 18:27:01.412423   77859 ssh_runner.go:195] Run: which lz4
	I0729 18:27:01.417768   77859 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 18:27:01.422809   77859 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:27:01.422836   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 18:27:02.909800   77859 crio.go:462] duration metric: took 1.492088664s to copy over tarball
	I0729 18:27:02.909868   77859 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:26:59.995351   77627 pod_ready.go:102] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:01.999130   77627 pod_ready.go:102] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:04.012357   77627 pod_ready.go:92] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:04.012385   77627 pod_ready.go:81] duration metric: took 10.523832262s for pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.012398   77627 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zf5jf" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.025409   77627 pod_ready.go:92] pod "kube-proxy-zf5jf" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:04.025448   77627 pod_ready.go:81] duration metric: took 13.042254ms for pod "kube-proxy-zf5jf" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.025461   77627 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.036057   77627 pod_ready.go:92] pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:04.036078   77627 pod_ready.go:81] duration metric: took 10.608531ms for pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.036090   77627 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:02.438066   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:02.438657   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:02.438686   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:02.438618   79098 retry.go:31] will retry after 601.056921ms: waiting for machine to come up
	I0729 18:27:03.041582   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:03.042097   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:03.042127   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:03.042040   79098 retry.go:31] will retry after 712.049848ms: waiting for machine to come up
	I0729 18:27:03.755536   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:03.756010   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:03.756040   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:03.755988   79098 retry.go:31] will retry after 1.092318096s: waiting for machine to come up
	I0729 18:27:04.849745   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:04.850202   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:04.850226   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:04.850147   79098 retry.go:31] will retry after 903.54457ms: waiting for machine to come up
	I0729 18:27:05.754781   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:05.755193   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:05.755218   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:05.755157   79098 retry.go:31] will retry after 1.693512671s: waiting for machine to come up
	I0729 18:27:05.188101   77859 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.27820184s)
	I0729 18:27:05.188132   77859 crio.go:469] duration metric: took 2.278304723s to extract the tarball
	I0729 18:27:05.188140   77859 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 18:27:05.227453   77859 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:05.274530   77859 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:27:05.274560   77859 cache_images.go:84] Images are preloaded, skipping loading
	I0729 18:27:05.274571   77859 kubeadm.go:934] updating node { 192.168.61.244 8444 v1.30.3 crio true true} ...
	I0729 18:27:05.274708   77859 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-502055 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-502055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:27:05.274788   77859 ssh_runner.go:195] Run: crio config
	I0729 18:27:05.320697   77859 cni.go:84] Creating CNI manager for ""
	I0729 18:27:05.320725   77859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:27:05.320741   77859 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:27:05.320774   77859 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.244 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-502055 NodeName:default-k8s-diff-port-502055 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.244"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.244 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:27:05.320948   77859 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.244
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-502055"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.244
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.244"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:27:05.321028   77859 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:27:05.331541   77859 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:27:05.331609   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:27:05.341433   77859 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0729 18:27:05.358696   77859 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:27:05.376531   77859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0729 18:27:05.394349   77859 ssh_runner.go:195] Run: grep 192.168.61.244	control-plane.minikube.internal$ /etc/hosts
	I0729 18:27:05.398156   77859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.244	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:05.411839   77859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:05.561467   77859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:27:05.583184   77859 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055 for IP: 192.168.61.244
	I0729 18:27:05.583209   77859 certs.go:194] generating shared ca certs ...
	I0729 18:27:05.583251   77859 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:05.583406   77859 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:27:05.583460   77859 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:27:05.583473   77859 certs.go:256] generating profile certs ...
	I0729 18:27:05.583577   77859 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/client.key
	I0729 18:27:05.583642   77859 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/apiserver.key.2edc4448
	I0729 18:27:05.583692   77859 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/proxy-client.key
	I0729 18:27:05.583835   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:27:05.583872   77859 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:27:05.583886   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:27:05.583917   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:27:05.583957   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:27:05.583991   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:27:05.584048   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:05.584726   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:27:05.624996   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:27:05.670153   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:27:05.715354   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:27:05.743807   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 18:27:05.777366   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:27:05.802152   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:27:05.826974   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:27:05.850417   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:27:05.873185   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:27:05.899387   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:27:05.927963   77859 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:27:05.947817   77859 ssh_runner.go:195] Run: openssl version
	I0729 18:27:05.955635   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:27:05.969765   77859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:27:05.974559   77859 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:27:05.974606   77859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:27:05.980557   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:27:05.991819   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:27:06.004961   77859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:06.009999   77859 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:06.010074   77859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:06.016045   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:27:06.027698   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:27:06.039648   77859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:27:06.045057   77859 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:27:06.045130   77859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:27:06.051127   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 18:27:06.062761   77859 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:27:06.068832   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:27:06.076652   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:27:06.084517   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:27:06.091125   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:27:06.097346   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:27:06.103428   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 18:27:06.109312   77859 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-502055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-502055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.244 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:27:06.109403   77859 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:27:06.109440   77859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:06.153439   77859 cri.go:89] found id: ""
	I0729 18:27:06.153528   77859 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:27:06.166412   77859 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:27:06.166434   77859 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:27:06.166486   77859 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:27:06.183064   77859 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:27:06.184168   77859 kubeconfig.go:125] found "default-k8s-diff-port-502055" server: "https://192.168.61.244:8444"
	I0729 18:27:06.186283   77859 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:27:06.197418   77859 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.244
	I0729 18:27:06.197444   77859 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:27:06.197454   77859 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:27:06.197506   77859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:06.237753   77859 cri.go:89] found id: ""
	I0729 18:27:06.237839   77859 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:27:06.257323   77859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:27:06.269157   77859 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:27:06.269176   77859 kubeadm.go:157] found existing configuration files:
	
	I0729 18:27:06.269229   77859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 18:27:06.279313   77859 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:27:06.279369   77859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:27:06.292141   77859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 18:27:06.303961   77859 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:27:06.304028   77859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:27:06.316051   77859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 18:27:06.328004   77859 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:27:06.328064   77859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:27:06.340357   77859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 18:27:06.352021   77859 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:27:06.352068   77859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:27:06.364479   77859 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:27:06.375313   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:06.498692   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:07.853845   77859 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.355105254s)
	I0729 18:27:07.853882   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:08.069616   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:08.144574   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:08.225236   77859 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:27:08.225336   77859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:08.725789   77859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:09.226271   77859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:09.270268   77859 api_server.go:72] duration metric: took 1.045028259s to wait for apiserver process to appear ...
	I0729 18:27:09.270298   77859 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:27:09.270320   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:09.270877   77859 api_server.go:269] stopped: https://192.168.61.244:8444/healthz: Get "https://192.168.61.244:8444/healthz": dial tcp 192.168.61.244:8444: connect: connection refused
	I0729 18:27:06.043838   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:08.044382   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:07.451087   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:07.451659   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:07.451688   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:07.451607   79098 retry.go:31] will retry after 1.734643072s: waiting for machine to come up
	I0729 18:27:09.188407   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:09.188963   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:09.188997   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:09.188900   79098 retry.go:31] will retry after 2.010973572s: waiting for machine to come up
	I0729 18:27:11.201171   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:11.201586   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:11.201620   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:11.201535   79098 retry.go:31] will retry after 3.178533437s: waiting for machine to come up
	I0729 18:27:09.771273   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:12.506136   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:27:12.506166   77859 api_server.go:103] status: https://192.168.61.244:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:27:12.506179   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:12.518847   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:27:12.518881   77859 api_server.go:103] status: https://192.168.61.244:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:27:12.771281   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:12.775798   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:27:12.775832   77859 api_server.go:103] status: https://192.168.61.244:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:27:13.270383   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:13.281935   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:27:13.281975   77859 api_server.go:103] status: https://192.168.61.244:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:27:13.770440   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:13.776004   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 200:
	ok
	I0729 18:27:13.783210   77859 api_server.go:141] control plane version: v1.30.3
	I0729 18:27:13.783237   77859 api_server.go:131] duration metric: took 4.512933596s to wait for apiserver health ...
	I0729 18:27:13.783247   77859 cni.go:84] Creating CNI manager for ""
	I0729 18:27:13.783253   77859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:27:13.785148   77859 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:27:13.786485   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:27:13.814986   77859 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 18:27:13.860557   77859 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:27:13.872823   77859 system_pods.go:59] 8 kube-system pods found
	I0729 18:27:13.872864   77859 system_pods.go:61] "coredns-7db6d8ff4d-mk6mx" [e005b1f9-cc7a-45aa-915e-85a461ebc814] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 18:27:13.872871   77859 system_pods.go:61] "etcd-default-k8s-diff-port-502055" [72b552cc-67b0-46bf-b3dd-b6732ebe8493] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 18:27:13.872879   77859 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-502055" [0dc22dbc-667e-4d6f-9938-b13bf3503f79] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 18:27:13.872885   77859 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-502055" [4df00b98-12cf-4359-9d98-8cce6ee9708a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 18:27:13.872891   77859 system_pods.go:61] "kube-proxy-cgdm8" [57a99bb3-9e63-47dd-a958-5be7f3c0a9c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 18:27:13.872898   77859 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-502055" [247b7cd1-6267-469d-af05-b33b284ae846] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 18:27:13.872903   77859 system_pods.go:61] "metrics-server-569cc877fc-bm8tm" [6891d9ee-82db-4307-adf1-ff60d35506bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:27:13.872912   77859 system_pods.go:61] "storage-provisioner" [c2264d30-60dc-41f9-9b84-3b073031cf1b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 18:27:13.872920   77859 system_pods.go:74] duration metric: took 12.342162ms to wait for pod list to return data ...
	I0729 18:27:13.872929   77859 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:27:13.879353   77859 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:27:13.879384   77859 node_conditions.go:123] node cpu capacity is 2
	I0729 18:27:13.879396   77859 node_conditions.go:105] duration metric: took 6.459994ms to run NodePressure ...
	I0729 18:27:13.879416   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:14.172203   77859 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 18:27:14.178467   77859 kubeadm.go:739] kubelet initialised
	I0729 18:27:14.178490   77859 kubeadm.go:740] duration metric: took 6.259862ms waiting for restarted kubelet to initialise ...
	I0729 18:27:14.178499   77859 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:27:14.184872   77859 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.190847   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.190871   77859 pod_ready.go:81] duration metric: took 5.974917ms for pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.190879   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.190886   77859 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.195570   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.195593   77859 pod_ready.go:81] duration metric: took 4.699847ms for pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.195603   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.195610   77859 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.199460   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.199480   77859 pod_ready.go:81] duration metric: took 3.863218ms for pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.199489   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.199494   77859 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.264725   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.264759   77859 pod_ready.go:81] duration metric: took 65.256372ms for pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.264774   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.264781   77859 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cgdm8" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.664064   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "kube-proxy-cgdm8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.664089   77859 pod_ready.go:81] duration metric: took 399.300184ms for pod "kube-proxy-cgdm8" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.664100   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "kube-proxy-cgdm8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.664109   77859 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:10.044797   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:12.543553   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:15.064029   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.064059   77859 pod_ready.go:81] duration metric: took 399.939139ms for pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:15.064074   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.064082   77859 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:15.464538   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.464564   77859 pod_ready.go:81] duration metric: took 400.472397ms for pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:15.464584   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.464592   77859 pod_ready.go:38] duration metric: took 1.286083847s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:27:15.464609   77859 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 18:27:15.478197   77859 ops.go:34] apiserver oom_adj: -16
	I0729 18:27:15.478220   77859 kubeadm.go:597] duration metric: took 9.311779975s to restartPrimaryControlPlane
	I0729 18:27:15.478229   77859 kubeadm.go:394] duration metric: took 9.368934157s to StartCluster
	I0729 18:27:15.478247   77859 settings.go:142] acquiring lock: {Name:mkd2c4591636cc1d19b23a0dab1807db2e7ea395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:15.478311   77859 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:27:15.479920   77859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:15.480159   77859 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.244 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:27:15.480244   77859 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 18:27:15.480322   77859 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-502055"
	I0729 18:27:15.480355   77859 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-502055"
	I0729 18:27:15.480356   77859 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-502055"
	W0729 18:27:15.480368   77859 addons.go:243] addon storage-provisioner should already be in state true
	I0729 18:27:15.480371   77859 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-502055"
	I0729 18:27:15.480396   77859 host.go:66] Checking if "default-k8s-diff-port-502055" exists ...
	I0729 18:27:15.480397   77859 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-502055"
	I0729 18:27:15.480402   77859 config.go:182] Loaded profile config "default-k8s-diff-port-502055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:27:15.480415   77859 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-502055"
	W0729 18:27:15.480426   77859 addons.go:243] addon metrics-server should already be in state true
	I0729 18:27:15.480460   77859 host.go:66] Checking if "default-k8s-diff-port-502055" exists ...
	I0729 18:27:15.480709   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.480723   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.480738   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.480738   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.480914   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.480943   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.482004   77859 out.go:177] * Verifying Kubernetes components...
	I0729 18:27:15.483504   77859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:15.495748   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35469
	I0729 18:27:15.495965   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43251
	I0729 18:27:15.495977   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41147
	I0729 18:27:15.496147   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.496324   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.496433   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.496604   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.496622   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.496760   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.496778   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.496914   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.496930   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.496982   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.497086   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.497219   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.497644   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.497672   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.498076   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:27:15.498408   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.498449   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.501769   77859 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-502055"
	W0729 18:27:15.501790   77859 addons.go:243] addon default-storageclass should already be in state true
	I0729 18:27:15.501814   77859 host.go:66] Checking if "default-k8s-diff-port-502055" exists ...
	I0729 18:27:15.502132   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.502163   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.516862   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42139
	I0729 18:27:15.517070   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33417
	I0729 18:27:15.517336   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.517525   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.517845   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.517877   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.518255   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.518356   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.518418   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.518657   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:27:15.518793   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.519009   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:27:15.520045   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44865
	I0729 18:27:15.520489   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.520613   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:27:15.520785   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:27:15.520962   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.520979   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.521295   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.521697   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.521712   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.522950   77859 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:15.522950   77859 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 18:27:15.524246   77859 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 18:27:15.524268   77859 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 18:27:15.524291   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:27:15.524355   77859 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:27:15.524370   77859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 18:27:15.524388   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:27:15.527946   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.528008   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.528609   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:27:15.528645   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.528678   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:27:15.528691   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.528723   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:27:15.528939   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:27:15.528953   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:27:15.529101   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:27:15.529150   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:27:15.529218   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:27:15.529524   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:27:15.529716   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:27:15.539969   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41273
	I0729 18:27:15.540410   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.540887   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.540913   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.541351   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.541675   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:27:15.543494   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:27:15.543728   77859 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 18:27:15.543744   77859 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 18:27:15.543762   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:27:15.546809   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.547225   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:27:15.547250   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.547405   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:27:15.547595   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:27:15.547736   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:27:15.547859   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:27:15.662741   77859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:27:15.681179   77859 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-502055" to be "Ready" ...
	I0729 18:27:15.754691   77859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:27:15.767498   77859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 18:27:15.767515   77859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 18:27:15.781857   77859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 18:27:15.801619   77859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 18:27:15.801645   77859 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 18:27:15.823663   77859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:27:15.823690   77859 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 18:27:15.847827   77859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:27:16.818178   77859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.063432468s)
	I0729 18:27:16.818180   77859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.036288517s)
	I0729 18:27:16.818268   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.818234   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.818290   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.818307   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.818677   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Closing plugin on server side
	I0729 18:27:16.818680   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Closing plugin on server side
	I0729 18:27:16.818694   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.818710   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.818723   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.818724   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.818735   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.818740   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.818755   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.818766   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.818989   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.819000   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.819004   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.819017   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.819014   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Closing plugin on server side
	I0729 18:27:16.824028   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.824047   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.824268   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.824292   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.877321   77859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.029455089s)
	I0729 18:27:16.877378   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.877393   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.877718   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.877767   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.877790   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.877801   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.878030   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.878047   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.878061   77859 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-502055"
	I0729 18:27:16.879704   77859 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 18:27:14.381238   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:14.381648   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:14.381677   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:14.381609   79098 retry.go:31] will retry after 4.005160817s: waiting for machine to come up
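
	The "will retry after 4.005160817s: waiting for machine to come up" entry above comes from a poll-with-backoff loop around the driver's IP lookup. A minimal Go sketch of that pattern follows; the helper names are ours, not minikube's retry.go, and only the overall shape (jittered, growing delays up to a cap) is taken from the log.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor polls check() until it succeeds or timeout elapses, sleeping a
	// jittered, growing delay between attempts, similar to the increasing
	// retry intervals logged while waiting for the VM to report an IP.
	func waitFor(check func() error, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for attempt := 1; ; attempt++ {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
			}
			// Add jitter so concurrent waiters do not poll in lockstep.
			time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2+1))))
			if delay < 4*time.Second {
				delay *= 2 // cap the backoff so a slow boot is still polled regularly
			}
		}
	}

	func main() {
		tries := 0
		err := waitFor(func() error {
			tries++
			if tries < 3 {
				return errors.New("machine has no IP address yet")
			}
			return nil
		}, 30*time.Second)
		fmt.Println("wait result:", err)
	}
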
	I0729 18:27:16.880972   77859 addons.go:510] duration metric: took 1.400728317s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 18:27:17.685480   77859 node_ready.go:53] node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:19.687853   77859 node_ready.go:53] node "default-k8s-diff-port-502055" has status "Ready":"False"
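
	The node_ready.go entries above poll the node object until its Ready condition turns true (the wait is bounded at 6m0s). For reference, the same condition check can be reproduced with client-go; the sketch below is illustrative only and assumes a kubeconfig at the default location, it is not minikube's own code.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady fetches the node and reports whether its Ready condition is True.
	func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		for {
			ready, err := nodeReady(ctx, cs, "default-k8s-diff-port-502055")
			if err == nil && ready {
				fmt.Println("node is Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for node to become Ready")
				return
			case <-time.After(2 * time.Second):
			}
		}
	}
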
	I0729 18:27:15.042487   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:17.043250   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:19.045374   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:19.859418   77394 start.go:364] duration metric: took 54.906462088s to acquireMachinesLock for "no-preload-888056"
	I0729 18:27:19.859470   77394 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:27:19.859478   77394 fix.go:54] fixHost starting: 
	I0729 18:27:19.859850   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:19.859896   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:19.876798   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46323
	I0729 18:27:19.877254   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:19.877674   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:27:19.877709   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:19.878087   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:19.878257   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:19.878399   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:27:19.879875   77394 fix.go:112] recreateIfNeeded on no-preload-888056: state=Stopped err=<nil>
	I0729 18:27:19.879909   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	W0729 18:27:19.880054   77394 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:27:19.882098   77394 out.go:177] * Restarting existing kvm2 VM for "no-preload-888056" ...
	I0729 18:27:18.388470   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.388971   78080 main.go:141] libmachine: (old-k8s-version-386663) Found IP for machine: 192.168.50.70
	I0729 18:27:18.388989   78080 main.go:141] libmachine: (old-k8s-version-386663) Reserving static IP address...
	I0729 18:27:18.388999   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has current primary IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.389431   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "old-k8s-version-386663", mac: "52:54:00:78:b6:ac", ip: "192.168.50.70"} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.389459   78080 main.go:141] libmachine: (old-k8s-version-386663) Reserved static IP address: 192.168.50.70
	I0729 18:27:18.389477   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | skip adding static IP to network mk-old-k8s-version-386663 - found existing host DHCP lease matching {name: "old-k8s-version-386663", mac: "52:54:00:78:b6:ac", ip: "192.168.50.70"}
	I0729 18:27:18.389493   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | Getting to WaitForSSH function...
	I0729 18:27:18.389515   78080 main.go:141] libmachine: (old-k8s-version-386663) Waiting for SSH to be available...
	I0729 18:27:18.391523   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.391916   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.391941   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.392062   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | Using SSH client type: external
	I0729 18:27:18.392088   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa (-rw-------)
	I0729 18:27:18.392119   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:27:18.392134   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | About to run SSH command:
	I0729 18:27:18.392150   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | exit 0
	I0729 18:27:18.514735   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | SSH cmd err, output: <nil>: 
	I0729 18:27:18.515114   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetConfigRaw
	I0729 18:27:18.515736   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:18.518194   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.518615   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.518651   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.518879   78080 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/config.json ...
	I0729 18:27:18.519090   78080 machine.go:94] provisionDockerMachine start ...
	I0729 18:27:18.519113   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:18.519322   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.521434   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.521824   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.521846   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.521996   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:18.522181   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.522349   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.522514   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:18.522724   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:18.522960   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:18.522975   78080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:27:18.622960   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:27:18.622989   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetMachineName
	I0729 18:27:18.623249   78080 buildroot.go:166] provisioning hostname "old-k8s-version-386663"
	I0729 18:27:18.623277   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetMachineName
	I0729 18:27:18.623461   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.626009   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.626376   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.626406   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.626649   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:18.626876   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.627141   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.627301   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:18.627474   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:18.627669   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:18.627683   78080 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-386663 && echo "old-k8s-version-386663" | sudo tee /etc/hostname
	I0729 18:27:18.748137   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-386663
	
	I0729 18:27:18.748165   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.751546   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.751882   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.751916   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.752086   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:18.752270   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.752409   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.752550   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:18.752747   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:18.753004   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:18.753031   78080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-386663' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-386663/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-386663' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:27:18.863358   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
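
	The provisioning steps above (sshutil.go "new ssh client", "Using SSH client type: native") run shell snippets such as the /etc/hosts rewrite over an SSH session into the VM. A comparable session can be opened with golang.org/x/crypto/ssh; this is only a sketch, with the IP, key path, and user copied from the log lines above rather than minikube's actual ssh_runner implementation.

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this in production
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", "192.168.50.70:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
		// Run a single command and capture combined stdout/stderr, as the log's
		// "About to run SSH command" / "SSH cmd err, output" pairs suggest.
		out, err := session.CombinedOutput("hostname")
		fmt.Printf("output=%q err=%v\n", out, err)
	}
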
	I0729 18:27:18.863389   78080 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:27:18.863415   78080 buildroot.go:174] setting up certificates
	I0729 18:27:18.863425   78080 provision.go:84] configureAuth start
	I0729 18:27:18.863436   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetMachineName
	I0729 18:27:18.863754   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:18.866285   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.866641   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.866668   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.866797   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.868886   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.869241   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.869270   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.869404   78080 provision.go:143] copyHostCerts
	I0729 18:27:18.869459   78080 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:27:18.869468   78080 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:27:18.869522   78080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:27:18.869614   78080 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:27:18.869624   78080 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:27:18.869652   78080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:27:18.869740   78080 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:27:18.869750   78080 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:27:18.869772   78080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:27:18.869833   78080 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-386663 san=[127.0.0.1 192.168.50.70 localhost minikube old-k8s-version-386663]
	I0729 18:27:19.142743   78080 provision.go:177] copyRemoteCerts
	I0729 18:27:19.142808   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:27:19.142842   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.145484   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.145843   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.145872   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.146092   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.146334   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.146532   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.146692   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.230725   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:27:19.255862   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 18:27:19.290922   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 18:27:19.317519   78080 provision.go:87] duration metric: took 454.081583ms to configureAuth
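
	configureAuth above regenerates a server certificate whose SANs cover 127.0.0.1, the VM IP, localhost, minikube and the profile name (provision.go:117). The standard-library way to produce such a SAN-bearing certificate looks roughly like the sketch below; it uses a throwaway CA for self-containment, whereas minikube reuses the CA under .minikube/certs, so treat every value here as illustrative.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Throwaway CA for the sketch; minikube loads its CA from certs/ca.pem and ca-key.pem.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(3, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		must(err)
		caCert, err := x509.ParseCertificate(caDER)
		must(err)

		// Server certificate carrying the SANs listed in the provision.go:117 line above.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-386663"}},
			DNSNames:     []string{"localhost", "minikube", "old-k8s-version-386663"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.70")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		must(err)
		must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
	}
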
	I0729 18:27:19.317549   78080 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:27:19.317766   78080 config.go:182] Loaded profile config "old-k8s-version-386663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 18:27:19.317854   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.320636   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.321074   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.321110   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.321346   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.321603   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.321782   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.321959   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.322158   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:19.322336   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:19.322351   78080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:27:19.626713   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:27:19.626737   78080 machine.go:97] duration metric: took 1.107631867s to provisionDockerMachine
	I0729 18:27:19.626749   78080 start.go:293] postStartSetup for "old-k8s-version-386663" (driver="kvm2")
	I0729 18:27:19.626763   78080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:27:19.626834   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.627168   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:27:19.627197   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.629389   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.629751   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.629782   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.629907   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.630102   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.630302   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.630460   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.709702   78080 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:27:19.713879   78080 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:27:19.713913   78080 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:27:19.713994   78080 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:27:19.714093   78080 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:27:19.714215   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:27:19.725226   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:19.751727   78080 start.go:296] duration metric: took 124.964072ms for postStartSetup
	I0729 18:27:19.751767   78080 fix.go:56] duration metric: took 19.951972224s for fixHost
	I0729 18:27:19.751796   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.754481   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.754843   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.754877   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.755107   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.755321   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.755482   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.755663   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.755829   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:19.756012   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:19.756024   78080 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:27:19.859279   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277639.831700968
	
	I0729 18:27:19.859302   78080 fix.go:216] guest clock: 1722277639.831700968
	I0729 18:27:19.859309   78080 fix.go:229] Guest: 2024-07-29 18:27:19.831700968 +0000 UTC Remote: 2024-07-29 18:27:19.751770935 +0000 UTC m=+272.565043390 (delta=79.930033ms)
	I0729 18:27:19.859327   78080 fix.go:200] guest clock delta is within tolerance: 79.930033ms
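
	The fix.go lines above compare the guest's reported clock against the host's and accept the restarted machine when the delta stays within a tolerance. The arithmetic is a signed difference; the small sketch below reuses the values from the log, while the 2-second tolerance is an assumption, not a value taken from it.

	package main

	import (
		"fmt"
		"time"
	)

	// clockDelta reports guest-minus-host skew and whether its magnitude is inside tolerance.
	func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			return delta, -delta <= tolerance
		}
		return delta, delta <= tolerance
	}

	func main() {
		// Guest clock from the log: 1722277639.831700968; the host reading is ~79.93ms earlier.
		guest := time.Unix(1722277639, 831700968)
		host := guest.Add(-79930033 * time.Nanosecond)
		d, ok := clockDelta(guest, host, 2*time.Second)
		fmt.Printf("delta=%v withinTolerance=%v\n", d, ok)
	}
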
	I0729 18:27:19.859332   78080 start.go:83] releasing machines lock for "old-k8s-version-386663", held for 20.059569122s
	I0729 18:27:19.859353   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.859661   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:19.862741   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.863225   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.863261   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.863449   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.864092   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.864309   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.864392   78080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:27:19.864432   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.864547   78080 ssh_runner.go:195] Run: cat /version.json
	I0729 18:27:19.864572   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.867636   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.867798   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.868019   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.868044   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.868178   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.868330   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.868356   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.868360   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.868500   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.868587   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.868667   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.868754   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.868910   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.869046   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.947441   78080 ssh_runner.go:195] Run: systemctl --version
	I0729 18:27:19.967868   78080 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:27:20.114336   78080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:27:20.121716   78080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:27:20.121793   78080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:27:20.143272   78080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:27:20.143298   78080 start.go:495] detecting cgroup driver to use...
	I0729 18:27:20.143385   78080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:27:20.162433   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:27:20.178310   78080 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:27:20.178397   78080 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:27:20.194091   78080 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:27:20.209796   78080 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:27:20.341466   78080 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:27:20.514215   78080 docker.go:233] disabling docker service ...
	I0729 18:27:20.514338   78080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:27:20.531018   78080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:27:20.551839   78080 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:27:20.680430   78080 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:27:20.834782   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:27:20.852454   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:27:20.874962   78080 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 18:27:20.875017   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:20.886550   78080 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:27:20.886619   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:20.899344   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:20.914254   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:20.927308   78080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:27:20.939807   78080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:27:20.951648   78080 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:27:20.951738   78080 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:27:20.967918   78080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:27:20.979872   78080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:21.125398   78080 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:27:21.290736   78080 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:27:21.290816   78080 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
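
	Waiting for the CRI socket (start.go:542 above) amounts to polling stat on /var/run/crio/crio.sock until it exists or 60s pass. A local-filesystem sketch of that wait is below; the path and timeout are copied from the log, but on the test host the stat actually runs over SSH inside the VM.

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForPath polls os.Stat on path until it appears or the timeout elapses.
	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		fmt.Println(waitForPath("/var/run/crio/crio.sock", 60*time.Second))
	}
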
	I0729 18:27:21.296922   78080 start.go:563] Will wait 60s for crictl version
	I0729 18:27:21.296987   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:21.302200   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:27:21.350783   78080 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:27:21.350919   78080 ssh_runner.go:195] Run: crio --version
	I0729 18:27:21.391539   78080 ssh_runner.go:195] Run: crio --version
	I0729 18:27:21.441225   78080 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 18:27:21.442583   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:21.446238   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:21.446728   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:21.446756   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:21.446988   78080 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 18:27:21.452537   78080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:21.470394   78080 kubeadm.go:883] updating cluster {Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:27:21.470555   78080 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:27:21.470610   78080 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:21.531670   78080 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 18:27:21.531742   78080 ssh_runner.go:195] Run: which lz4
	I0729 18:27:21.536436   78080 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 18:27:21.542100   78080 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:27:21.542139   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 18:27:19.883514   77394 main.go:141] libmachine: (no-preload-888056) Calling .Start
	I0729 18:27:19.883693   77394 main.go:141] libmachine: (no-preload-888056) Ensuring networks are active...
	I0729 18:27:19.884447   77394 main.go:141] libmachine: (no-preload-888056) Ensuring network default is active
	I0729 18:27:19.884847   77394 main.go:141] libmachine: (no-preload-888056) Ensuring network mk-no-preload-888056 is active
	I0729 18:27:19.885240   77394 main.go:141] libmachine: (no-preload-888056) Getting domain xml...
	I0729 18:27:19.886133   77394 main.go:141] libmachine: (no-preload-888056) Creating domain...
	I0729 18:27:21.226599   77394 main.go:141] libmachine: (no-preload-888056) Waiting to get IP...
	I0729 18:27:21.227673   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:21.228215   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:21.228278   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:21.228178   79288 retry.go:31] will retry after 290.676407ms: waiting for machine to come up
	I0729 18:27:21.520818   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:21.521458   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:21.521480   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:21.521360   79288 retry.go:31] will retry after 266.145355ms: waiting for machine to come up
	I0729 18:27:21.789603   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:21.790170   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:21.790200   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:21.790137   79288 retry.go:31] will retry after 464.137123ms: waiting for machine to come up
	I0729 18:27:22.255586   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:22.256159   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:22.256184   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:22.256098   79288 retry.go:31] will retry after 562.330595ms: waiting for machine to come up
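These DBG lines come from a poll loop: the kvm2 driver keeps asking libvirt for a DHCP lease matching the domain's MAC address and sleeps a growing, jittered interval between attempts until an IP appears. A self-contained sketch of that wait-with-increasing-delay pattern (the lookup function is a stand-in, not the actual libvirt query, and this version simply doubles the delay rather than jittering it):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForIP calls lookup until it returns an address or the attempts are
    // exhausted, increasing the wait between tries as the log above does.
    func waitForIP(lookup func() (string, error), attempts int, initial time.Duration) (string, error) {
        delay := initial
        for i := 0; i < attempts; i++ {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            delay *= 2
        }
        return "", errors.New("machine did not obtain an IP address in time")
    }

    func main() {
        // Stand-in lookup that "succeeds" on the third call.
        calls := 0
        lookup := func() (string, error) {
            calls++
            if calls < 3 {
                return "", errors.New("no lease yet")
            }
            return "192.168.72.80", nil
        }
        ip, err := waitForIP(lookup, 10, 300*time.Millisecond)
        fmt.Println(ip, err)
    }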
	I0729 18:27:21.691280   77859 node_ready.go:53] node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:23.188725   77859 node_ready.go:49] node "default-k8s-diff-port-502055" has status "Ready":"True"
	I0729 18:27:23.188758   77859 node_ready.go:38] duration metric: took 7.507549954s for node "default-k8s-diff-port-502055" to be "Ready" ...
	I0729 18:27:23.188772   77859 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:27:23.197714   77859 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:23.204037   77859 pod_ready.go:92] pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:23.204065   77859 pod_ready.go:81] duration metric: took 6.32123ms for pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:23.204086   77859 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:23.211765   77859 pod_ready.go:92] pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:23.211791   77859 pod_ready.go:81] duration metric: took 7.69614ms for pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:23.211803   77859 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:21.544757   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:24.043649   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:23.329902   78080 crio.go:462] duration metric: took 1.793505279s to copy over tarball
	I0729 18:27:23.329979   78080 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:27:26.453768   78080 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.123735537s)
	I0729 18:27:26.453800   78080 crio.go:469] duration metric: took 3.123869338s to extract the tarball
	I0729 18:27:26.453809   78080 ssh_runner.go:146] rm: /preloaded.tar.lz4
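Since the runtime had no preloaded images, the 473 MB preload tarball is copied over SSH, unpacked into /var with an lz4-aware tar while preserving security.capability xattrs, and then deleted to reclaim disk space. A rough sketch of the extract-and-clean-up step (paths mirror the log; running it for real needs root and a tar with lz4 support):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // extractPreload unpacks an lz4-compressed image preload into destDir,
    // preserving extended attributes, and removes the tarball afterwards.
    func extractPreload(tarball, destDir string) error {
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", destDir, "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            return fmt.Errorf("extracting %s: %w", tarball, err)
        }
        // The tarball is only a transfer vehicle; free the disk space once extracted.
        return os.Remove(tarball)
    }

    func main() {
        // Placeholder paths mirroring the log; adjust before running anywhere real.
        if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }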
	I0729 18:27:26.501748   78080 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:26.538093   78080 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 18:27:26.538124   78080 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 18:27:26.538226   78080 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.538297   78080 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 18:27:26.538387   78080 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.538232   78080 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:26.538441   78080 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.538303   78080 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.538277   78080 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.538783   78080 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.540806   78080 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.540823   78080 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.540847   78080 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.540858   78080 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 18:27:26.540806   78080 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:26.540894   78080 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.540937   78080 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.540987   78080 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.700993   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.704402   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.712647   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.714034   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.715935   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.753888   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.758588   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 18:27:26.837981   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:26.844473   78080 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 18:27:26.844532   78080 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.844578   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.877082   78080 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 18:27:26.877134   78080 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.877183   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.889792   78080 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 18:27:26.889887   78080 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.889842   78080 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 18:27:26.889944   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.889983   78080 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.890034   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.916338   78080 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 18:27:26.916388   78080 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.916440   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.916437   78080 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 18:27:26.916540   78080 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.916581   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.942747   78080 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 18:27:26.942794   78080 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 18:27:26.942839   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:27.056976   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:27.056976   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 18:27:27.057045   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:27.057071   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:27.057101   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:27.057152   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:27.057178   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 18:27:27.219396   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
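The cache check asks the runtime for each image's ID and compares it with the hash recorded for the cached copy; on a mismatch, or when the image is missing entirely as here, the stale tag is removed with crictl and the image is queued to be loaded from the on-disk cache under .minikube/cache/images. A compact sketch of that decide-then-remove step (the expected ID is the pause:3.2 hash quoted in the log; the helper name is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // needsTransfer reports whether the image present in the runtime differs
    // from the ID we expect (or is absent), in which case the stale copy is
    // removed and the cached archive has to be loaded instead.
    func needsTransfer(image, wantID string) bool {
        out, err := exec.Command("sudo", "podman", "image", "inspect",
            "--format", "{{.Id}}", image).Output()
        if err != nil {
            return true // not present at all: it has to be loaded
        }
        return strings.TrimSpace(string(out)) != wantID
    }

    func main() {
        img := "registry.k8s.io/pause:3.2"
        want := "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"
        if needsTransfer(img, want) {
            // Remove the mismatched tag so the cached archive can be loaded cleanly.
            _ = exec.Command("sudo", "/usr/bin/crictl", "rmi", img).Run()
            fmt.Println(img, "queued for reload from the local image cache")
        }
    }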
	I0729 18:27:22.820490   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:22.820969   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:22.820993   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:22.820906   79288 retry.go:31] will retry after 728.452145ms: waiting for machine to come up
	I0729 18:27:23.550655   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:23.551337   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:23.551361   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:23.551287   79288 retry.go:31] will retry after 782.583051ms: waiting for machine to come up
	I0729 18:27:24.335785   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:24.336257   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:24.336310   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:24.336235   79288 retry.go:31] will retry after 1.040109521s: waiting for machine to come up
	I0729 18:27:25.377676   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:25.378187   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:25.378231   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:25.378153   79288 retry.go:31] will retry after 1.276093038s: waiting for machine to come up
	I0729 18:27:26.655479   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:26.655922   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:26.655950   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:26.655872   79288 retry.go:31] will retry after 1.267687539s: waiting for machine to come up
	I0729 18:27:25.219175   77859 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:27.225735   77859 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:27.718741   77859 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:27.718772   77859 pod_ready.go:81] duration metric: took 4.506959705s for pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.718786   77859 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.723687   77859 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:27.723709   77859 pod_ready.go:81] duration metric: took 4.915901ms for pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.723720   77859 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cgdm8" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.728504   77859 pod_ready.go:92] pod "kube-proxy-cgdm8" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:27.728526   77859 pod_ready.go:81] duration metric: took 4.797185ms for pod "kube-proxy-cgdm8" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.728538   77859 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.733036   77859 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:27.733061   77859 pod_ready.go:81] duration metric: took 4.514471ms for pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.733073   77859 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:29.739966   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:26.044607   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:28.543664   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:27.219541   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 18:27:27.223329   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 18:27:27.223406   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 18:27:27.223450   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 18:27:27.223492   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 18:27:27.223536   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 18:27:27.223567   78080 cache_images.go:92] duration metric: took 685.427642ms to LoadCachedImages
	W0729 18:27:27.223653   78080 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0729 18:27:27.223672   78080 kubeadm.go:934] updating node { 192.168.50.70 8443 v1.20.0 crio true true} ...
	I0729 18:27:27.223785   78080 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-386663 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:27:27.223866   78080 ssh_runner.go:195] Run: crio config
	I0729 18:27:27.273186   78080 cni.go:84] Creating CNI manager for ""
	I0729 18:27:27.273207   78080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:27:27.273217   78080 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:27:27.273241   78080 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.70 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-386663 NodeName:old-k8s-version-386663 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 18:27:27.273424   78080 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-386663"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
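The three rendered documents above are what kubeadm, the kubelet and kube-proxy are started with: the "0%" evictionHard thresholds and imageGCHighThresholdPercent: 100 effectively disable disk-pressure eviction and image GC inside the small test VM, and the zeroed conntrack values tell kube-proxy to skip touching the corresponding sysctls, as the inline comments note. Minikube fills these documents from templates; a stripped-down sketch of such a render step, using a made-up subset of the fields rather than the real generator:

    package main

    import (
        "os"
        "text/template"
    )

    // kubeletOpts is a made-up subset of the values substituted into the
    // KubeletConfiguration shown in the log; the real generator carries many more.
    type kubeletOpts struct {
        CgroupDriver  string
        ClusterDomain string
        StaticPodPath string
    }

    const kubeletTmpl = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: {{.CgroupDriver}}
    clusterDomain: "{{.ClusterDomain}}"
    # disable disk resource management by default
    imageGCHighThresholdPercent: 100
    evictionHard:
      nodefs.available: "0%"
    staticPodPath: {{.StaticPodPath}}
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
        // Render to stdout with values matching this run's config.
        _ = t.Execute(os.Stdout, kubeletOpts{
            CgroupDriver:  "cgroupfs",
            ClusterDomain: "cluster.local",
            StaticPodPath: "/etc/kubernetes/manifests",
        })
    }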
	I0729 18:27:27.273498   78080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 18:27:27.285247   78080 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:27:27.285327   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:27:27.295747   78080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 18:27:27.314192   78080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:27:27.331654   78080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 18:27:27.351717   78080 ssh_runner.go:195] Run: grep 192.168.50.70	control-plane.minikube.internal$ /etc/hosts
	I0729 18:27:27.356205   78080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:27.370446   78080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:27.509250   78080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:27:27.528776   78080 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663 for IP: 192.168.50.70
	I0729 18:27:27.528804   78080 certs.go:194] generating shared ca certs ...
	I0729 18:27:27.528823   78080 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:27.528991   78080 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:27:27.529045   78080 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:27:27.529061   78080 certs.go:256] generating profile certs ...
	I0729 18:27:27.529194   78080 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/client.key
	I0729 18:27:27.529308   78080 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.key.71ea3f9f
	I0729 18:27:27.529364   78080 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.key
	I0729 18:27:27.529529   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:27:27.529569   78080 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:27:27.529584   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:27:27.529614   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:27:27.529645   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:27:27.529689   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:27:27.529751   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:27.530573   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:27:27.582122   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:27:27.626846   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:27:27.663609   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:27:27.700294   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 18:27:27.746614   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:27:27.785212   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:27:27.834479   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:27:27.866939   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:27:27.892613   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:27:27.919059   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:27:27.947557   78080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:27:27.968625   78080 ssh_runner.go:195] Run: openssl version
	I0729 18:27:27.976500   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:27:27.991016   78080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:27:27.996228   78080 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:27:27.996285   78080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:27:28.002529   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:27:28.013844   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:27:28.025388   78080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:28.029982   78080 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:28.030042   78080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:28.036362   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:27:28.050134   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:27:28.062742   78080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:27:28.067240   78080 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:27:28.067293   78080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:27:28.072973   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 18:27:28.084143   78080 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:27:28.089526   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:27:28.096556   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:27:28.103044   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:27:28.109337   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:27:28.115455   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:27:28.121449   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
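Each control-plane certificate is probed with openssl x509 -checkend 86400, which exits non-zero if the certificate is already expired or will expire within the next 24 hours (86400 seconds), so a soon-to-expire cert is caught here instead of surfacing later as an opaque apiserver failure. The same test can be done natively in Go; a sketch (the file path in main is a placeholder for the certs under /var/lib/minikube/certs):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires inside
    // the given window, mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        // Placeholder path; the run above checks several certs in one pass.
        soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println("expires within 24h:", soon, err)
    }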
	I0729 18:27:28.127395   78080 kubeadm.go:392] StartCluster: {Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:27:28.127504   78080 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:27:28.127581   78080 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:28.176772   78080 cri.go:89] found id: ""
	I0729 18:27:28.176837   78080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:27:28.187955   78080 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:27:28.187979   78080 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:27:28.188034   78080 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:27:28.197926   78080 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:27:28.199364   78080 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-386663" does not appear in /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:27:28.200382   78080 kubeconfig.go:62] /home/jenkins/minikube-integration/19345-11206/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-386663" cluster setting kubeconfig missing "old-k8s-version-386663" context setting]
	I0729 18:27:28.201737   78080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:28.287712   78080 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:27:28.300675   78080 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.70
	I0729 18:27:28.300716   78080 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:27:28.300728   78080 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:27:28.300795   78080 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:28.343880   78080 cri.go:89] found id: ""
	I0729 18:27:28.343962   78080 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:27:28.362391   78080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:27:28.372805   78080 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:27:28.372830   78080 kubeadm.go:157] found existing configuration files:
	
	I0729 18:27:28.372882   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:27:28.383540   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:27:28.383629   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:27:28.396564   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:27:28.409151   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:27:28.409208   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:27:28.422243   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:27:28.434736   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:27:28.434839   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:27:28.447681   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:27:28.460008   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:27:28.460073   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
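This block is the stale-config sweep: each kubeconfig under /etc/kubernetes is grepped for the expected https://control-plane.minikube.internal:8443 endpoint and removed when the endpoint is absent (here the files simply do not exist yet), so the kubeadm init phase kubeconfig run that follows regenerates all of them from scratch. A sketch of that check-and-remove pass (endpoint and file list are taken from the log; the helper name is invented):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // pruneStaleKubeconfigs removes any of the given files that do not mention
    // the expected control-plane endpoint; missing files are ignored.
    func pruneStaleKubeconfigs(endpoint string, files []string) {
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err == nil && bytes.Contains(data, []byte(endpoint)) {
                continue // still points at the right control plane, keep it
            }
            if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
                fmt.Fprintln(os.Stderr, "removing", f, ":", err)
            }
        }
    }

    func main() {
        pruneStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }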
	I0729 18:27:28.472647   78080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:27:28.484179   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:28.634526   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:29.206575   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:29.449626   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:29.550859   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:29.681945   78080 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:27:29.682015   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:30.182098   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:30.682977   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:31.182152   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:31.682468   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:32.183031   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:27.924957   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:27.925430   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:27.925461   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:27.925378   79288 retry.go:31] will retry after 1.455979038s: waiting for machine to come up
	I0729 18:27:29.383257   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:29.383769   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:29.383793   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:29.383722   79288 retry.go:31] will retry after 1.862834258s: waiting for machine to come up
	I0729 18:27:31.248806   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:31.249394   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:31.249414   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:31.249344   79288 retry.go:31] will retry after 3.203097967s: waiting for machine to come up
	I0729 18:27:32.242350   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:34.738663   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:31.043735   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:33.543152   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:32.682567   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:33.182100   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:33.682494   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:34.183075   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:34.683115   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:35.183094   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:35.683092   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:36.182173   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:36.682843   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:37.182324   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:34.453552   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:34.453906   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:34.453930   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:34.453852   79288 retry.go:31] will retry after 3.166208105s: waiting for machine to come up
	I0729 18:27:36.739239   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:38.740812   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:35.543428   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:38.042603   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:37.622330   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.622738   77394 main.go:141] libmachine: (no-preload-888056) Found IP for machine: 192.168.72.80
	I0729 18:27:37.622767   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has current primary IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.622779   77394 main.go:141] libmachine: (no-preload-888056) Reserving static IP address...
	I0729 18:27:37.623108   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "no-preload-888056", mac: "52:54:00:b2:b0:1a", ip: "192.168.72.80"} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.623144   77394 main.go:141] libmachine: (no-preload-888056) DBG | skip adding static IP to network mk-no-preload-888056 - found existing host DHCP lease matching {name: "no-preload-888056", mac: "52:54:00:b2:b0:1a", ip: "192.168.72.80"}
	I0729 18:27:37.623160   77394 main.go:141] libmachine: (no-preload-888056) Reserved static IP address: 192.168.72.80
	I0729 18:27:37.623174   77394 main.go:141] libmachine: (no-preload-888056) Waiting for SSH to be available...
	I0729 18:27:37.623183   77394 main.go:141] libmachine: (no-preload-888056) DBG | Getting to WaitForSSH function...
	I0729 18:27:37.625391   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.625732   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.625759   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.625927   77394 main.go:141] libmachine: (no-preload-888056) DBG | Using SSH client type: external
	I0729 18:27:37.625948   77394 main.go:141] libmachine: (no-preload-888056) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa (-rw-------)
	I0729 18:27:37.625994   77394 main.go:141] libmachine: (no-preload-888056) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:27:37.626008   77394 main.go:141] libmachine: (no-preload-888056) DBG | About to run SSH command:
	I0729 18:27:37.626020   77394 main.go:141] libmachine: (no-preload-888056) DBG | exit 0
	I0729 18:27:37.750587   77394 main.go:141] libmachine: (no-preload-888056) DBG | SSH cmd err, output: <nil>: 
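With the static IP reserved, the driver confirms that sshd is accepting logins by running a bare exit 0 through the system ssh client, using the machine's generated private key with host-key checking disabled (StrictHostKeyChecking=no, UserKnownHostsFile=/dev/null). A sketch of that reachability probe (user, address and key path are copied from the log; the function name is invented):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sshReachable runs `exit 0` over the system ssh client to confirm that the
    // VM accepts logins; a connection failure or non-zero exit returns an error.
    func sshReachable(user, addr, keyPath string) error {
        cmd := exec.Command("ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-i", keyPath,
            fmt.Sprintf("%s@%s", user, addr),
            "exit 0")
        return cmd.Run()
    }

    func main() {
        err := sshReachable("docker", "192.168.72.80",
            "/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa")
        fmt.Println("ssh reachable:", err == nil)
    }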
	I0729 18:27:37.750986   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetConfigRaw
	I0729 18:27:37.751717   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetIP
	I0729 18:27:37.754387   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.754753   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.754781   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.754995   77394 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/config.json ...
	I0729 18:27:37.755184   77394 machine.go:94] provisionDockerMachine start ...
	I0729 18:27:37.755207   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:37.755397   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:37.757649   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.757965   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.757988   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.758128   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:37.758297   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.758463   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.758599   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:37.758754   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:37.758918   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:37.758927   77394 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:27:37.862940   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:27:37.862976   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:27:37.863205   77394 buildroot.go:166] provisioning hostname "no-preload-888056"
	I0729 18:27:37.863234   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:27:37.863425   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:37.866190   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.866538   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.866565   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.866705   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:37.866878   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.867046   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.867166   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:37.867307   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:37.867478   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:37.867490   77394 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-888056 && echo "no-preload-888056" | sudo tee /etc/hostname
	I0729 18:27:37.985031   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-888056
	
	I0729 18:27:37.985070   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:37.987577   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.987917   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.987945   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.988126   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:37.988311   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.988469   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.988601   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:37.988786   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:37.988994   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:37.989012   77394 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-888056' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-888056/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-888056' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:27:38.103831   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:27:38.103853   77394 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:27:38.103870   77394 buildroot.go:174] setting up certificates
	I0729 18:27:38.103878   77394 provision.go:84] configureAuth start
	I0729 18:27:38.103886   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:27:38.104166   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetIP
	I0729 18:27:38.107080   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.107493   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.107521   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.107690   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.110087   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.110495   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.110520   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.110738   77394 provision.go:143] copyHostCerts
	I0729 18:27:38.110793   77394 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:27:38.110802   77394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:27:38.110853   77394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:27:38.110968   77394 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:27:38.110978   77394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:27:38.110998   77394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:27:38.111056   77394 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:27:38.111063   77394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:27:38.111080   77394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:27:38.111149   77394 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.no-preload-888056 san=[127.0.0.1 192.168.72.80 localhost minikube no-preload-888056]
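The server certificate above is generated with SANs covering 127.0.0.1, the VM IP 192.168.72.80, localhost, minikube, and the profile name. Assuming the remote paths from the auth options earlier in this log (server cert at /etc/docker/server.pem), the SAN list could be double-checked with a command of the form:

	out/minikube-linux-amd64 -p no-preload-888056 ssh "sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"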
	I0729 18:27:38.327305   77394 provision.go:177] copyRemoteCerts
	I0729 18:27:38.327378   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:27:38.327407   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.330008   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.330304   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.330327   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.330516   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:38.330739   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.330908   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:38.331071   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:27:38.414678   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 18:27:38.443418   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:27:38.469248   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 18:27:38.494014   77394 provision.go:87] duration metric: took 390.106553ms to configureAuth
	I0729 18:27:38.494049   77394 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:27:38.494245   77394 config.go:182] Loaded profile config "no-preload-888056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:27:38.494357   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.497162   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.497586   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.497620   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.497946   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:38.498137   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.498328   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.498566   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:38.498766   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:38.498940   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:38.498955   77394 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:27:38.762438   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:27:38.762462   77394 machine.go:97] duration metric: took 1.007266999s to provisionDockerMachine
	I0729 18:27:38.762473   77394 start.go:293] postStartSetup for "no-preload-888056" (driver="kvm2")
	I0729 18:27:38.762484   77394 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:27:38.762511   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:38.762797   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:27:38.762832   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.765677   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.766031   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.766054   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.766222   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:38.766432   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.766621   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:38.766774   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:27:38.854492   77394 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:27:38.858934   77394 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:27:38.858962   77394 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:27:38.859041   77394 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:27:38.859136   77394 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:27:38.859251   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:27:38.869459   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:38.894422   77394 start.go:296] duration metric: took 131.935433ms for postStartSetup
	I0729 18:27:38.894466   77394 fix.go:56] duration metric: took 19.034987866s for fixHost
	I0729 18:27:38.894492   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.897266   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.897654   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.897684   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.897890   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:38.898102   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.898250   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.898356   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:38.898547   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:38.898721   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:38.898732   77394 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 18:27:39.003526   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277658.970659996
	
	I0729 18:27:39.003571   77394 fix.go:216] guest clock: 1722277658.970659996
	I0729 18:27:39.003581   77394 fix.go:229] Guest: 2024-07-29 18:27:38.970659996 +0000 UTC Remote: 2024-07-29 18:27:38.8944731 +0000 UTC m=+356.533366653 (delta=76.186896ms)
	I0729 18:27:39.003600   77394 fix.go:200] guest clock delta is within tolerance: 76.186896ms
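The 76.186896ms figure is simply the difference between the guest and remote timestamps above: 1722277658.970659996 − 1722277658.894473100 = 0.076186896 s, i.e. about 76.19 ms of skew between the VM clock and the host, which the log treats as within tolerance.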
	I0729 18:27:39.003605   77394 start.go:83] releasing machines lock for "no-preload-888056", held for 19.144159359s
	I0729 18:27:39.003622   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:39.003881   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetIP
	I0729 18:27:39.006550   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.006850   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:39.006886   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.007005   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:39.007597   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:39.007779   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:39.007879   77394 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:27:39.007939   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:39.008001   77394 ssh_runner.go:195] Run: cat /version.json
	I0729 18:27:39.008026   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:39.010634   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.010941   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:39.010965   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.010984   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.011257   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:39.011442   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:39.011474   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.011487   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:39.011632   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:39.011678   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:39.011782   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:39.011951   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:39.011985   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:27:39.012094   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:27:39.114446   77394 ssh_runner.go:195] Run: systemctl --version
	I0729 18:27:39.120848   77394 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:27:39.266976   77394 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:27:39.273603   77394 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:27:39.273670   77394 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:27:39.295511   77394 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:27:39.295533   77394 start.go:495] detecting cgroup driver to use...
	I0729 18:27:39.295593   77394 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:27:39.313692   77394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:27:39.328435   77394 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:27:39.328502   77394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:27:39.342580   77394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:27:39.356694   77394 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:27:39.474555   77394 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:27:39.632766   77394 docker.go:233] disabling docker service ...
	I0729 18:27:39.632827   77394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:27:39.648961   77394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:27:39.663277   77394 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:27:39.813329   77394 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:27:39.944017   77394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:27:39.957624   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:27:39.976348   77394 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 18:27:39.976401   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:39.986672   77394 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:27:39.986735   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:39.996867   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.007547   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.018141   77394 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:27:40.029258   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.040007   77394 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.057611   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.068107   77394 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:27:40.077798   77394 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:27:40.077877   77394 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:27:40.091040   77394 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:27:40.100846   77394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:40.227049   77394 ssh_runner.go:195] Run: sudo systemctl restart crio
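Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with settings roughly like the following (a sketch reconstructed from the commands in this log, not a dump of the actual file):

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The daemon-reload and crio restart immediately above are what pick these changes up before the version checks that follow.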
	I0729 18:27:40.368213   77394 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:27:40.368295   77394 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:27:40.374168   77394 start.go:563] Will wait 60s for crictl version
	I0729 18:27:40.374239   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.378268   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:27:40.422500   77394 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:27:40.422579   77394 ssh_runner.go:195] Run: crio --version
	I0729 18:27:40.451170   77394 ssh_runner.go:195] Run: crio --version
	I0729 18:27:40.481789   77394 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 18:27:37.682180   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:38.182453   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:38.682639   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:39.182874   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:39.682496   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:40.182727   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:40.683073   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:41.182060   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:41.682421   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:42.182813   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:40.483209   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetIP
	I0729 18:27:40.486303   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:40.486738   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:40.486768   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:40.487032   77394 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 18:27:40.491318   77394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:40.505196   77394 kubeadm.go:883] updating cluster {Name:no-preload-888056 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-888056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.80 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:27:40.505303   77394 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 18:27:40.505333   77394 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:40.541356   77394 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 18:27:40.541380   77394 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 18:27:40.541445   77394 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:40.541452   77394 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:40.541465   77394 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:40.541495   77394 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.541503   77394 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.541527   77394 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.541583   77394 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:40.542060   77394 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 18:27:40.543507   77394 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:40.543519   77394 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.543505   77394 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 18:27:40.543535   77394 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.543504   77394 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.543761   77394 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:40.543799   77394 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:40.543999   77394 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:40.693026   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.709057   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.715664   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.720337   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:40.746126   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 18:27:40.748805   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:40.759200   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:40.768613   77394 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 18:27:40.768659   77394 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.768705   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.812940   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:40.852143   77394 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 18:27:40.852173   77394 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 18:27:40.852191   77394 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.852206   77394 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.852237   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.852249   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.890477   77394 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 18:27:40.890521   77394 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:40.890566   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.991390   77394 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 18:27:40.991435   77394 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:40.991462   77394 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 18:27:40.991486   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.991501   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.991508   77394 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:40.991548   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.991556   77394 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 18:27:40.991579   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.991595   77394 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:40.991609   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.991654   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.991694   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:41.087626   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 18:27:41.087736   77394 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 18:27:41.087742   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 18:27:41.087782   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:41.087819   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 18:27:41.087830   77394 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 18:27:41.087883   77394 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 18:27:41.091774   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 18:27:41.091828   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:41.091858   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:41.091873   77394 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0729 18:27:41.104679   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 18:27:41.104702   77394 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 18:27:41.104733   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 18:27:41.104750   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 18:27:41.155992   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 18:27:41.156114   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 18:27:41.156227   77394 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 18:27:41.169410   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 18:27:41.169535   77394 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 18:27:41.176103   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 18:27:41.176116   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 18:27:41.176214   77394 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 18:27:41.241044   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:43.739887   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:40.543004   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:43.044338   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:42.682911   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:43.182279   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:43.682506   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:44.182109   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:44.682593   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:45.183002   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:45.682275   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:46.182491   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:46.683027   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:47.182311   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:44.874768   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.769989933s)
	I0729 18:27:44.874798   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 18:27:44.874827   77394 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 18:27:44.874861   77394 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (3.71860957s)
	I0729 18:27:44.874894   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 18:27:44.874906   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 18:27:44.874930   77394 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.705380577s)
	I0729 18:27:44.874947   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 18:27:44.874972   77394 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (3.698734733s)
	I0729 18:27:44.875001   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 18:27:46.333065   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.458135446s)
	I0729 18:27:46.333109   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 18:27:46.333137   77394 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 18:27:46.333175   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 18:27:45.739935   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:47.740654   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:45.542272   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:47.543683   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:47.682979   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:48.183024   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:48.682708   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:49.182427   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:49.682335   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:50.182146   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:50.682716   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:51.182231   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:51.683106   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:52.182739   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:48.194389   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.861190748s)
	I0729 18:27:48.194419   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 18:27:48.194443   77394 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 18:27:48.194483   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 18:27:50.159353   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.964849018s)
	I0729 18:27:50.159384   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 18:27:50.159427   77394 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 18:27:50.159494   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 18:27:52.256998   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.097482067s)
	I0729 18:27:52.257038   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 18:27:52.257075   77394 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 18:27:52.257125   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 18:27:50.239878   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:52.740167   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:50.042299   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:52.042567   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:54.043462   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:52.682628   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:53.182081   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:53.682919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:54.183194   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:54.682506   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:55.182992   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:55.682152   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:56.183083   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:56.682897   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:57.182789   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:52.899503   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 18:27:52.899539   77394 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 18:27:52.899594   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 18:27:54.868011   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.968389841s)
	I0729 18:27:54.868043   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 18:27:54.868075   77394 cache_images.go:123] Successfully loaded all cached images
	I0729 18:27:54.868080   77394 cache_images.go:92] duration metric: took 14.326689217s to LoadCachedImages
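Since no preload tarball exists for v1.31.0-beta.0 (see the "assuming images are not preloaded" message earlier), every required image follows the same two-step pattern visible in this log: remove the stale reference with crictl, then stream the cached archive into CRI-O's storage with podman. For etcd, for example, the pair of commands is:

	sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0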
	I0729 18:27:54.868088   77394 kubeadm.go:934] updating node { 192.168.72.80 8443 v1.31.0-beta.0 crio true true} ...
	I0729 18:27:54.868226   77394 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-888056 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-888056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
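The ExecStart line above is written to the node as a systemd drop-in (the 10-kubeadm.conf scp'd a few lines below). Assuming the standard layout used in this run, the effective kubelet unit plus drop-in could be inspected on the VM with, for example:

	out/minikube-linux-amd64 -p no-preload-888056 ssh "systemctl cat kubelet"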
	I0729 18:27:54.868305   77394 ssh_runner.go:195] Run: crio config
	I0729 18:27:54.928569   77394 cni.go:84] Creating CNI manager for ""
	I0729 18:27:54.928591   77394 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:27:54.928604   77394 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:27:54.928633   77394 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.80 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-888056 NodeName:no-preload-888056 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:27:54.928800   77394 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-888056"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
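
The KubeletConfiguration block above is what gets written out to /var/tmp/minikube/kubeadm.yaml.new a few lines later before the kubelet is restarted. As a rough illustration only (this is not minikube's own validation path, and checkKubeletConfig is a hypothetical helper), a snippet like it can be sanity-checked in Go with sigs.k8s.io/yaml before it is shipped to the node:

package main

import (
	"fmt"
	"log"

	"sigs.k8s.io/yaml"
)

// kubeletSnippet mirrors (a trimmed copy of) the KubeletConfiguration rendered above.
const kubeletSnippet = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
evictionHard:
  nodefs.available: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
`

// checkKubeletConfig is a hypothetical sanity check: it only verifies that the
// document parses as YAML and that a few fields the bootstrap depends on exist.
func checkKubeletConfig(doc []byte) error {
	var cfg map[string]interface{}
	if err := yaml.Unmarshal(doc, &cfg); err != nil {
		return fmt.Errorf("parse kubelet config: %w", err)
	}
	for _, key := range []string{"cgroupDriver", "containerRuntimeEndpoint", "staticPodPath"} {
		if _, ok := cfg[key]; !ok {
			return fmt.Errorf("missing %q in KubeletConfiguration", key)
		}
	}
	return nil
}

func main() {
	if err := checkKubeletConfig([]byte(kubeletSnippet)); err != nil {
		log.Fatal(err)
	}
	fmt.Println("kubelet config snippet looks sane")
}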
	
	I0729 18:27:54.928871   77394 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 18:27:54.939479   77394 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:27:54.939534   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:27:54.948928   77394 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0729 18:27:54.966700   77394 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 18:27:54.984218   77394 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0729 18:27:55.000813   77394 ssh_runner.go:195] Run: grep 192.168.72.80	control-plane.minikube.internal$ /etc/hosts
	I0729 18:27:55.004529   77394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:55.016140   77394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:55.141053   77394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:27:55.158874   77394 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056 for IP: 192.168.72.80
	I0729 18:27:55.158897   77394 certs.go:194] generating shared ca certs ...
	I0729 18:27:55.158918   77394 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:55.159074   77394 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:27:55.159136   77394 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:27:55.159150   77394 certs.go:256] generating profile certs ...
	I0729 18:27:55.159245   77394 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/client.key
	I0729 18:27:55.159320   77394 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/apiserver.key.f09a151f
	I0729 18:27:55.159373   77394 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/proxy-client.key
	I0729 18:27:55.159511   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:27:55.159552   77394 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:27:55.159566   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:27:55.159600   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:27:55.159641   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:27:55.159680   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:27:55.159734   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:55.160575   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:27:55.211823   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:27:55.248637   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:27:55.287972   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:27:55.317920   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 18:27:55.346034   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:27:55.377569   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:27:55.402593   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 18:27:55.427969   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:27:55.452060   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:27:55.476635   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:27:55.500831   77394 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:27:55.518744   77394 ssh_runner.go:195] Run: openssl version
	I0729 18:27:55.524865   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:27:55.536601   77394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:27:55.541752   77394 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:27:55.541807   77394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:27:55.548070   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 18:27:55.559866   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:27:55.571833   77394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:27:55.576304   77394 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:27:55.576342   77394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:27:55.582204   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:27:55.594531   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:27:55.605773   77394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:55.610585   77394 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:55.610633   77394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:55.616478   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:27:55.628160   77394 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:27:55.632691   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:27:55.638793   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:27:55.644678   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:27:55.651117   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:27:55.657397   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:27:55.663351   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
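
Each `openssl x509 ... -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now. The same check can be done natively in Go; the sketch below is illustrative only (the file path and the 24h window are taken from the command lines above), not the code minikube actually runs:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within the given window (the equivalent of `openssl x509 -checkend`).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Until(cert.NotAfter) < window, nil
}

func main() {
	// Path taken from the log above; adjust for a real cluster.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}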
	I0729 18:27:55.670080   77394 kubeadm.go:392] StartCluster: {Name:no-preload-888056 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-888056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.80 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:27:55.670183   77394 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:27:55.670248   77394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:55.712280   77394 cri.go:89] found id: ""
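
The empty `found id: ""` result comes from the crictl invocation on the line above: no kube-system containers exist yet on the freshly restarted node. A rough local equivalent of that listing step (run directly rather than through minikube's SSH runner) might look like this:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// listKubeSystemContainers shells out to crictl the same way the log above does:
// all containers (any state) whose pod namespace label is kube-system.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
}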
	I0729 18:27:55.712343   77394 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:27:55.722878   77394 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:27:55.722898   77394 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:27:55.722935   77394 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:27:55.732704   77394 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:27:55.733646   77394 kubeconfig.go:125] found "no-preload-888056" server: "https://192.168.72.80:8443"
	I0729 18:27:55.736512   77394 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:27:55.748360   77394 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.80
	I0729 18:27:55.748403   77394 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:27:55.748416   77394 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:27:55.748464   77394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:55.789773   77394 cri.go:89] found id: ""
	I0729 18:27:55.789854   77394 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:27:55.808905   77394 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:27:55.819969   77394 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:27:55.819991   77394 kubeadm.go:157] found existing configuration files:
	
	I0729 18:27:55.820064   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:27:55.829392   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:27:55.829445   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:27:55.838934   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:27:55.848659   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:27:55.848720   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:27:55.859490   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:27:55.870024   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:27:55.870076   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:27:55.881599   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:27:55.891805   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:27:55.891869   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:27:55.901750   77394 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:27:55.911525   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:56.021031   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:57.075545   77394 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.054482988s)
	I0729 18:27:57.075571   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:57.302701   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:57.382837   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
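
The five commands above replay kubeadm's init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config. A compressed sketch of that sequence, executed locally instead of over minikube's SSH runner and with the binaries path assumed from the log above, is:

package main

import (
	"log"
	"os/exec"
)

func main() {
	const (
		binDir = "/var/lib/minikube/binaries/v1.31.0-beta.0" // assumed, from the log above
		config = "/var/tmp/minikube/kubeadm.yaml"
	)
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{"env", "PATH=" + binDir + ":/usr/bin", "kubeadm", "init", "phase"}, phase...)
		args = append(args, "--config", config)
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("phase %v failed: %v\n%s", phase, err, out)
		}
		log.Printf("phase %v done", phase)
	}
}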
	I0729 18:27:55.261397   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:57.738688   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:59.739828   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:56.543870   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:59.043285   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:57.682237   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:58.182211   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:58.682456   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:59.182669   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:59.682863   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:00.182261   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:00.682993   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:01.182832   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:01.682899   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:02.182765   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:57.492480   77394 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:27:57.492580   77394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:57.993240   77394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:58.492965   77394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:58.517442   77394 api_server.go:72] duration metric: took 1.024961129s to wait for apiserver process to appear ...
	I0729 18:27:58.517479   77394 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:27:58.517505   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:27:58.518046   77394 api_server.go:269] stopped: https://192.168.72.80:8443/healthz: Get "https://192.168.72.80:8443/healthz": dial tcp 192.168.72.80:8443: connect: connection refused
	I0729 18:27:59.017614   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:02.088238   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:28:02.088265   77394 api_server.go:103] status: https://192.168.72.80:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:28:02.088277   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:02.147855   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:28:02.147882   77394 api_server.go:103] status: https://192.168.72.80:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:28:02.518439   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:02.525213   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:28:02.525247   77394 api_server.go:103] status: https://192.168.72.80:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:28:03.018275   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:03.024993   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:28:03.025023   77394 api_server.go:103] status: https://192.168.72.80:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:28:03.517564   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:03.523409   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 200:
	ok
	I0729 18:28:03.529656   77394 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 18:28:03.529687   77394 api_server.go:131] duration metric: took 5.01219984s to wait for apiserver health ...
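
The healthz probe above cycles through connection refused, 403 (anonymous access blocked while RBAC bootstrap finishes), 500 (bootstrap-roles and priority-classes hooks still failing), and finally 200. A stripped-down poller with the same shape might look like the following; it skips TLS verification for brevity, whereas the real client trusts the cluster CA, so treat it as a sketch only:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 or
// the deadline passes. 403 and 500 responses are treated as "not ready yet",
// matching the progression seen in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify only for this sketch; real code should trust the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			status := resp.StatusCode
			resp.Body.Close()
			if status == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.72.80:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}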
	I0729 18:28:03.529698   77394 cni.go:84] Creating CNI manager for ""
	I0729 18:28:03.529706   77394 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:28:03.531527   77394 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:28:01.740935   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:03.743806   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:01.043882   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:03.542540   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:02.682331   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:03.182154   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:03.682499   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:04.182355   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:04.682338   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:05.182107   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:05.683125   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:06.182481   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:06.683153   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:07.182992   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:03.532788   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:28:03.544878   77394 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 18:28:03.586100   77394 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:28:03.604975   77394 system_pods.go:59] 8 kube-system pods found
	I0729 18:28:03.605012   77394 system_pods.go:61] "coredns-5cfdc65f69-bg5j4" [7a26ffbb-014c-4cf7-b302-214cf78374bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 18:28:03.605022   77394 system_pods.go:61] "etcd-no-preload-888056" [d76f2eb7-67d9-4ba0-8d2f-acfc78559651] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 18:28:03.605036   77394 system_pods.go:61] "kube-apiserver-no-preload-888056" [1dbea0ee-58be-47ca-b4ab-94065413768d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 18:28:03.605044   77394 system_pods.go:61] "kube-controller-manager-no-preload-888056" [fb8ce9d9-2953-4b91-8734-87bd38a63eb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 18:28:03.605051   77394 system_pods.go:61] "kube-proxy-w5z2f" [2425da76-cf2d-41c9-b8db-1370ab5333c5] Running
	I0729 18:28:03.605059   77394 system_pods.go:61] "kube-scheduler-no-preload-888056" [9958567f-116d-4094-9e7e-6208f7358486] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 18:28:03.605066   77394 system_pods.go:61] "metrics-server-78fcd8795b-jcdcw" [c506a5f8-d569-4c3d-9b6e-21b9fc63a86a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:28:03.605073   77394 system_pods.go:61] "storage-provisioner" [ccbc4fa6-1237-46ca-ac80-34972b9a43df] Running
	I0729 18:28:03.605082   77394 system_pods.go:74] duration metric: took 18.959807ms to wait for pod list to return data ...
	I0729 18:28:03.605095   77394 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:28:03.609225   77394 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:28:03.609249   77394 node_conditions.go:123] node cpu capacity is 2
	I0729 18:28:03.609261   77394 node_conditions.go:105] duration metric: took 4.16099ms to run NodePressure ...
	I0729 18:28:03.609278   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:28:03.881440   77394 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 18:28:03.886401   77394 kubeadm.go:739] kubelet initialised
	I0729 18:28:03.886429   77394 kubeadm.go:740] duration metric: took 4.958282ms waiting for restarted kubelet to initialise ...
	I0729 18:28:03.886440   77394 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:28:03.891373   77394 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace to be "Ready" ...
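
The pod_ready.go wait loop above repeatedly fetches each system-critical pod and checks its Ready condition. A minimal client-go version of that check, with the kubeconfig path, pod name, and poll interval taken as assumptions and error handling trimmed, could look like this:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady returns true when the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // same 4m0s budget as the log above
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5cfdc65f69-bg5j4", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}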
	I0729 18:28:05.900595   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:06.239029   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:08.240309   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:06.042541   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:08.043322   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:07.682582   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:08.182094   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:08.682613   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:09.182936   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:09.682444   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:10.182354   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:10.682183   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:11.182502   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:11.682466   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:12.182113   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:08.397084   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:10.399546   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:10.897981   77394 pod_ready.go:92] pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:10.898006   77394 pod_ready.go:81] duration metric: took 7.006606905s for pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:10.898014   77394 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:10.903064   77394 pod_ready.go:92] pod "etcd-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:10.903088   77394 pod_ready.go:81] duration metric: took 5.066249ms for pod "etcd-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:10.903099   77394 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:11.409319   77394 pod_ready.go:92] pod "kube-apiserver-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:11.409344   77394 pod_ready.go:81] duration metric: took 506.238678ms for pod "kube-apiserver-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:11.409353   77394 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:10.250001   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:12.741099   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:10.542146   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:13.042422   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:12.682526   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:13.183014   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:13.682449   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:14.182138   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:14.683065   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:15.182838   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:15.682680   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:16.182714   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:16.682116   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:17.182842   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:13.415469   77394 pod_ready.go:102] pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:13.917111   77394 pod_ready.go:92] pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:13.917134   77394 pod_ready.go:81] duration metric: took 2.507774546s for pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.917149   77394 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-w5z2f" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.922045   77394 pod_ready.go:92] pod "kube-proxy-w5z2f" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:13.922069   77394 pod_ready.go:81] duration metric: took 4.912892ms for pod "kube-proxy-w5z2f" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.922080   77394 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.927633   77394 pod_ready.go:92] pod "kube-scheduler-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:13.927654   77394 pod_ready.go:81] duration metric: took 5.565409ms for pod "kube-scheduler-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.927666   77394 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:15.934081   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:15.240105   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:17.740031   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:19.740077   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:15.042540   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:17.043335   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:19.542061   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:17.683114   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:18.182919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:18.683103   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:19.182074   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:19.683031   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:20.182701   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:20.682749   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:21.182949   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:21.683001   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:22.182167   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:17.935797   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:20.434416   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:21.740735   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:24.238828   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:21.544060   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:24.042058   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:22.682723   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:23.182510   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:23.683084   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:24.182220   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:24.682699   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:25.182288   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:25.682433   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:26.182919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:26.682851   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:27.182225   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:22.435465   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:24.935088   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:26.239694   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:28.240174   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:26.542381   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:29.043706   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:27.682408   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:28.182187   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:28.683034   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:29.182922   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:29.682990   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:29.683063   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:29.730368   78080 cri.go:89] found id: ""
	I0729 18:28:29.730405   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.730413   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:29.730419   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:29.730473   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:29.770368   78080 cri.go:89] found id: ""
	I0729 18:28:29.770398   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.770409   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:29.770426   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:29.770479   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:29.809873   78080 cri.go:89] found id: ""
	I0729 18:28:29.809898   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.809906   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:29.809911   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:29.809970   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:29.848980   78080 cri.go:89] found id: ""
	I0729 18:28:29.849006   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.849016   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:29.849023   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:29.849082   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:29.887261   78080 cri.go:89] found id: ""
	I0729 18:28:29.887292   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.887302   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:29.887311   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:29.887388   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:29.927011   78080 cri.go:89] found id: ""
	I0729 18:28:29.927041   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.927051   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:29.927058   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:29.927122   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:29.965577   78080 cri.go:89] found id: ""
	I0729 18:28:29.965609   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.965619   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:29.965625   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:29.965693   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:29.999180   78080 cri.go:89] found id: ""
	I0729 18:28:29.999210   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.999222   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:29.999233   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:29.999253   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:30.049401   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:30.049433   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:30.063903   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:30.063939   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:30.194776   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:30.194797   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:30.194812   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:30.261861   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:30.261906   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
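
When no containers can be found, the tool falls back to collecting raw diagnostics: the kubelet and CRI-O journals, dmesg, `kubectl describe nodes`, and container status. A condensed, local-only version of that gathering loop (the command strings are copied from the log above; the SSH layer is omitted) might be:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Diagnostic commands mirroring the "Gathering logs for ..." steps above, in order.
	diagnostics := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, d := range diagnostics {
		out, err := exec.Command("/bin/bash", "-c", d.cmd).CombinedOutput()
		if err != nil {
			// A failing command (e.g. the apiserver is down) is still worth recording.
			fmt.Printf("=== %s (error: %v) ===\n%s\n", d.name, err, out)
			continue
		}
		fmt.Printf("=== %s ===\n%s\n", d.name, out)
	}
}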
	I0729 18:28:27.434837   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:29.435257   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:31.435297   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:30.738940   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:32.740748   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:31.542494   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:33.542872   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:32.801821   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:32.814741   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:32.814815   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:32.853490   78080 cri.go:89] found id: ""
	I0729 18:28:32.853514   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.853522   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:32.853530   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:32.853580   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:32.890314   78080 cri.go:89] found id: ""
	I0729 18:28:32.890339   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.890349   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:32.890356   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:32.890435   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:32.928231   78080 cri.go:89] found id: ""
	I0729 18:28:32.928255   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.928262   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:32.928268   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:32.928314   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:32.964024   78080 cri.go:89] found id: ""
	I0729 18:28:32.964054   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.964065   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:32.964072   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:32.964136   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:33.002099   78080 cri.go:89] found id: ""
	I0729 18:28:33.002127   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.002140   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:33.002146   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:33.002195   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:33.042238   78080 cri.go:89] found id: ""
	I0729 18:28:33.042265   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.042273   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:33.042278   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:33.042331   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:33.078715   78080 cri.go:89] found id: ""
	I0729 18:28:33.078741   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.078750   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:33.078756   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:33.078816   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:33.123304   78080 cri.go:89] found id: ""
	I0729 18:28:33.123334   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.123342   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:33.123351   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:33.123366   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:33.198950   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:33.198994   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:33.223566   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:33.223594   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:33.306500   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:33.306526   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:33.306541   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:33.379386   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:33.379421   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
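(Editor's note, not part of the captured log.) The repeated "listing CRI containers" / "found id: """ pairs above are minikube probing each control-plane component by name with crictl; every probe returns an empty ID list because no containers have been created on the node yet. Below is a minimal local sketch of that probe, assuming sudo and crictl are available on PATH; the real code in cri.go runs the same command over SSH via ssh_runner.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers mirrors the probe seen in the log: ask crictl for the IDs
// of all containers (running or exited) whose name matches the component.
func listCRIContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// crictl prints one container ID per line; an empty output means 0 containers,
	// which is exactly what every probe in the log above reports.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listCRIContainers(c)
		if err != nil {
			fmt.Printf("probe %q failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
	}
}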
	I0729 18:28:35.926834   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:35.942218   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:35.942296   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:35.980115   78080 cri.go:89] found id: ""
	I0729 18:28:35.980142   78080 logs.go:276] 0 containers: []
	W0729 18:28:35.980153   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:35.980159   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:35.980221   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:36.015354   78080 cri.go:89] found id: ""
	I0729 18:28:36.015379   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.015387   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:36.015392   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:36.015456   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:36.056411   78080 cri.go:89] found id: ""
	I0729 18:28:36.056435   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.056445   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:36.056451   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:36.056499   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:36.099153   78080 cri.go:89] found id: ""
	I0729 18:28:36.099180   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.099188   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:36.099193   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:36.099241   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:36.133427   78080 cri.go:89] found id: ""
	I0729 18:28:36.133459   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.133470   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:36.133477   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:36.133544   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:36.168619   78080 cri.go:89] found id: ""
	I0729 18:28:36.168646   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.168657   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:36.168664   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:36.168723   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:36.203636   78080 cri.go:89] found id: ""
	I0729 18:28:36.203666   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.203676   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:36.203684   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:36.203747   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:36.246495   78080 cri.go:89] found id: ""
	I0729 18:28:36.246523   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.246533   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:36.246544   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:36.246561   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:36.260630   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:36.260656   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:36.337406   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:36.337424   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:36.337435   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:36.410016   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:36.410049   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:36.453458   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:36.453492   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:33.435859   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:35.934955   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:35.240070   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:37.739406   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:39.740035   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:35.543153   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:37.543467   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:39.543573   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:39.004147   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:39.018217   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:39.018279   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:39.054130   78080 cri.go:89] found id: ""
	I0729 18:28:39.054155   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.054166   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:39.054172   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:39.054219   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:39.090458   78080 cri.go:89] found id: ""
	I0729 18:28:39.090482   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.090490   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:39.090501   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:39.090548   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:39.126933   78080 cri.go:89] found id: ""
	I0729 18:28:39.126960   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.126971   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:39.126978   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:39.127042   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:39.162324   78080 cri.go:89] found id: ""
	I0729 18:28:39.162352   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.162381   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:39.162389   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:39.162450   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:39.202440   78080 cri.go:89] found id: ""
	I0729 18:28:39.202464   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.202471   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:39.202477   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:39.202537   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:39.238314   78080 cri.go:89] found id: ""
	I0729 18:28:39.238342   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.238352   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:39.238368   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:39.238436   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:39.275545   78080 cri.go:89] found id: ""
	I0729 18:28:39.275584   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.275592   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:39.275598   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:39.275663   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:39.311575   78080 cri.go:89] found id: ""
	I0729 18:28:39.311603   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.311614   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:39.311624   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:39.311643   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:39.367667   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:39.367711   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:39.381823   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:39.381852   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:39.456060   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:39.456083   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:39.456100   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:39.531747   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:39.531784   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
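(Editor's note, not part of the captured log.) Every "describe nodes" attempt above fails with "connection to the server localhost:8443 was refused", which is consistent with the empty kube-apiserver listings: nothing is serving the API port while the cluster is (re)starting. A quick hedged check one could run on the node to confirm the port state; this is a hypothetical helper, not part of minikube.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The kubeconfig used by the failing kubectl calls points at localhost:8443.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err) // matches the "connection refused" lines in the log
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}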
	I0729 18:28:42.077771   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:42.092424   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:42.092512   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:42.128710   78080 cri.go:89] found id: ""
	I0729 18:28:42.128744   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.128756   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:42.128765   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:42.128834   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:42.166092   78080 cri.go:89] found id: ""
	I0729 18:28:42.166126   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.166133   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:42.166138   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:42.166186   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:42.200955   78080 cri.go:89] found id: ""
	I0729 18:28:42.200981   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.200989   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:42.200994   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:42.201053   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:38.435476   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:40.935166   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:42.240354   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:44.739322   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:41.543640   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:43.543781   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:42.240176   78080 cri.go:89] found id: ""
	I0729 18:28:42.240203   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.240212   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:42.240219   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:42.240279   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:42.279844   78080 cri.go:89] found id: ""
	I0729 18:28:42.279872   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.279880   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:42.279885   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:42.279946   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:42.313071   78080 cri.go:89] found id: ""
	I0729 18:28:42.313099   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.313108   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:42.313114   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:42.313187   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:42.348540   78080 cri.go:89] found id: ""
	I0729 18:28:42.348566   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.348573   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:42.348580   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:42.348630   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:42.384688   78080 cri.go:89] found id: ""
	I0729 18:28:42.384714   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.384725   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:42.384736   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:42.384750   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:42.399178   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:42.399206   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:42.472903   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:42.472921   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:42.472937   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:42.558541   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:42.558573   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:42.599403   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:42.599432   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:45.154026   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:45.167130   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:45.167200   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:45.203627   78080 cri.go:89] found id: ""
	I0729 18:28:45.203654   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.203663   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:45.203668   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:45.203714   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:45.242293   78080 cri.go:89] found id: ""
	I0729 18:28:45.242316   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.242325   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:45.242332   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:45.242403   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:45.282253   78080 cri.go:89] found id: ""
	I0729 18:28:45.282275   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.282282   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:45.282288   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:45.282335   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:45.320151   78080 cri.go:89] found id: ""
	I0729 18:28:45.320175   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.320183   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:45.320189   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:45.320250   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:45.356210   78080 cri.go:89] found id: ""
	I0729 18:28:45.356236   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.356247   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:45.356254   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:45.356316   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:45.393083   78080 cri.go:89] found id: ""
	I0729 18:28:45.393116   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.393131   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:45.393139   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:45.393199   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:45.430235   78080 cri.go:89] found id: ""
	I0729 18:28:45.430263   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.430274   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:45.430282   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:45.430346   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:45.463068   78080 cri.go:89] found id: ""
	I0729 18:28:45.463132   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.463143   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:45.463155   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:45.463203   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:45.541411   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:45.541441   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:45.581967   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:45.582001   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:45.639427   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:45.639459   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:45.655715   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:45.655741   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:45.725820   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:42.943815   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:45.435444   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:46.739873   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:49.240293   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:46.042576   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:48.042735   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
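(Editor's note, not part of the captured log.) The interleaved pod_ready.go lines come from other test processes (77394, 77859, 77627) polling their metrics-server pods, whose Ready condition keeps reporting "False". A rough sketch of that kind of readiness poll using kubectl's JSONPath output, assuming kubectl and a working kubeconfig; the pod name is copied from the log, and minikube itself does this check with client-go rather than by shelling out.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady reports whether the named pod's Ready condition is "True".
func podReady(namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	// Poll every few seconds, as the pod_ready log lines above do.
	for i := 0; i < 10; i++ {
		ready, err := podReady("kube-system", "metrics-server-569cc877fc-bm8tm")
		if err != nil {
			fmt.Println("poll failed:", err)
		} else if ready {
			fmt.Println("pod is Ready")
			return
		} else {
			fmt.Println(`status "Ready":"False"`)
		}
		time.Sleep(2500 * time.Millisecond)
	}
}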
	I0729 18:28:48.226252   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:48.240419   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:48.240494   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:48.271506   78080 cri.go:89] found id: ""
	I0729 18:28:48.271538   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.271550   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:48.271557   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:48.271615   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:48.305163   78080 cri.go:89] found id: ""
	I0729 18:28:48.305186   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.305198   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:48.305203   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:48.305252   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:48.336453   78080 cri.go:89] found id: ""
	I0729 18:28:48.336480   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.336492   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:48.336500   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:48.336557   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:48.368690   78080 cri.go:89] found id: ""
	I0729 18:28:48.368713   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.368720   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:48.368725   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:48.368784   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:48.401723   78080 cri.go:89] found id: ""
	I0729 18:28:48.401746   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.401753   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:48.401758   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:48.401822   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:48.439876   78080 cri.go:89] found id: ""
	I0729 18:28:48.439896   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.439903   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:48.439908   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:48.439956   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:48.473352   78080 cri.go:89] found id: ""
	I0729 18:28:48.473383   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.473394   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:48.473401   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:48.473461   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:48.506752   78080 cri.go:89] found id: ""
	I0729 18:28:48.506779   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.506788   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:48.506799   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:48.506815   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:48.547513   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:48.547535   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:48.599704   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:48.599733   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:48.613577   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:48.613604   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:48.681272   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:48.681290   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:48.681301   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:51.267397   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:51.280243   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:51.280317   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:51.314047   78080 cri.go:89] found id: ""
	I0729 18:28:51.314078   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.314090   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:51.314097   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:51.314162   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:51.346048   78080 cri.go:89] found id: ""
	I0729 18:28:51.346073   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.346080   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:51.346085   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:51.346144   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:51.380511   78080 cri.go:89] found id: ""
	I0729 18:28:51.380543   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.380553   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:51.380561   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:51.380637   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:51.415189   78080 cri.go:89] found id: ""
	I0729 18:28:51.415213   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.415220   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:51.415227   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:51.415310   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:51.454324   78080 cri.go:89] found id: ""
	I0729 18:28:51.454351   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.454380   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:51.454388   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:51.454449   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:51.488737   78080 cri.go:89] found id: ""
	I0729 18:28:51.488768   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.488779   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:51.488787   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:51.488848   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:51.528869   78080 cri.go:89] found id: ""
	I0729 18:28:51.528903   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.528912   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:51.528920   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:51.528972   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:51.566039   78080 cri.go:89] found id: ""
	I0729 18:28:51.566067   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.566075   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:51.566086   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:51.566102   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:51.604746   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:51.604774   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:51.661048   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:51.661089   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:51.675420   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:51.675447   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:51.754496   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:51.754531   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:51.754548   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:47.934575   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:49.935187   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:51.247773   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:53.740386   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:50.043378   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:52.543104   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:54.335796   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:54.350726   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:54.350784   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:54.389661   78080 cri.go:89] found id: ""
	I0729 18:28:54.389683   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.389694   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:54.389701   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:54.389761   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:54.427073   78080 cri.go:89] found id: ""
	I0729 18:28:54.427100   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.427110   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:54.427117   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:54.427178   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:54.466761   78080 cri.go:89] found id: ""
	I0729 18:28:54.466793   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.466802   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:54.466808   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:54.466871   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:54.501115   78080 cri.go:89] found id: ""
	I0729 18:28:54.501144   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.501159   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:54.501167   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:54.501229   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:54.535430   78080 cri.go:89] found id: ""
	I0729 18:28:54.535461   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.535472   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:54.535480   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:54.535543   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:54.574994   78080 cri.go:89] found id: ""
	I0729 18:28:54.575024   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.575034   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:54.575041   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:54.575107   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:54.608770   78080 cri.go:89] found id: ""
	I0729 18:28:54.608792   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.608800   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:54.608805   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:54.608850   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:54.648026   78080 cri.go:89] found id: ""
	I0729 18:28:54.648050   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.648057   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:54.648066   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:54.648077   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:54.728445   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:54.728485   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:54.774752   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:54.774781   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:54.826549   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:54.826582   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:54.840366   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:54.840394   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:54.907422   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:52.434956   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:54.436125   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:56.933929   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:56.239045   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:58.239967   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:55.041898   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:57.042968   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:59.542837   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:57.408469   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:57.421855   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:57.421923   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:57.457794   78080 cri.go:89] found id: ""
	I0729 18:28:57.457816   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.457824   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:57.457829   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:57.457908   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:57.492851   78080 cri.go:89] found id: ""
	I0729 18:28:57.492880   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.492888   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:57.492894   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:57.492946   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:57.528221   78080 cri.go:89] found id: ""
	I0729 18:28:57.528249   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.528258   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:57.528265   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:57.528330   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:57.565504   78080 cri.go:89] found id: ""
	I0729 18:28:57.565536   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.565547   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:57.565554   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:57.565618   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:57.599391   78080 cri.go:89] found id: ""
	I0729 18:28:57.599418   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.599426   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:57.599432   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:57.599491   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:57.643757   78080 cri.go:89] found id: ""
	I0729 18:28:57.643784   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.643798   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:57.643806   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:57.643867   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:57.680825   78080 cri.go:89] found id: ""
	I0729 18:28:57.680853   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.680864   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:57.680871   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:57.680936   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:57.714450   78080 cri.go:89] found id: ""
	I0729 18:28:57.714479   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.714490   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:57.714500   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:57.714516   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:57.798411   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:57.798437   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:57.798453   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:57.878210   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:57.878246   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:57.917476   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:57.917505   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:57.971395   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:57.971432   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
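(Editor's note, not part of the captured log.) Each retry cycle above ends with the same five "Gathering logs for ..." steps (kubelet, dmesg, describe nodes, CRI-O, container status); only their order varies between cycles. A compact sketch of that gathering loop with the commands copied verbatim from the "Run:" lines, assuming they are executed locally rather than over SSH as ssh_runner.go does.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands taken from the log; "describe nodes" keeps failing while the apiserver is down.
	steps := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range steps {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		fmt.Printf("=== %s ===\n%s", s.name, out)
		if err != nil {
			fmt.Printf("(step failed: %v)\n", err)
		}
	}
}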
	I0729 18:29:00.486419   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:00.500625   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:00.500703   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:00.539625   78080 cri.go:89] found id: ""
	I0729 18:29:00.539650   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.539659   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:00.539682   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:00.539737   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:00.577252   78080 cri.go:89] found id: ""
	I0729 18:29:00.577284   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.577297   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:00.577303   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:00.577350   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:00.611850   78080 cri.go:89] found id: ""
	I0729 18:29:00.611878   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.611886   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:00.611892   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:00.611939   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:00.648964   78080 cri.go:89] found id: ""
	I0729 18:29:00.648989   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.648996   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:00.649003   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:00.649062   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:00.686124   78080 cri.go:89] found id: ""
	I0729 18:29:00.686147   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.686156   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:00.686161   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:00.686217   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:00.721166   78080 cri.go:89] found id: ""
	I0729 18:29:00.721195   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.721205   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:00.721213   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:00.721276   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:00.758394   78080 cri.go:89] found id: ""
	I0729 18:29:00.758423   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.758431   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:00.758436   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:00.758491   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:00.793487   78080 cri.go:89] found id: ""
	I0729 18:29:00.793514   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.793523   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:00.793533   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:00.793549   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:00.807069   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:00.807106   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:00.880611   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:00.880629   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:00.880641   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:00.963534   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:00.963568   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:01.004145   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:01.004174   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:58.933964   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:00.934221   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:00.739676   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:02.741020   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:02.042346   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:04.541902   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:03.560985   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:03.574407   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:03.574476   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:03.608027   78080 cri.go:89] found id: ""
	I0729 18:29:03.608048   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.608057   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:03.608062   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:03.608119   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:03.644777   78080 cri.go:89] found id: ""
	I0729 18:29:03.644804   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.644814   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:03.644821   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:03.644895   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:03.684050   78080 cri.go:89] found id: ""
	I0729 18:29:03.684074   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.684082   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:03.684089   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:03.684149   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:03.724350   78080 cri.go:89] found id: ""
	I0729 18:29:03.724376   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.724383   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:03.724390   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:03.724439   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:03.766859   78080 cri.go:89] found id: ""
	I0729 18:29:03.766887   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.766898   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:03.766905   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:03.766967   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:03.800535   78080 cri.go:89] found id: ""
	I0729 18:29:03.800562   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.800572   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:03.800579   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:03.800639   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:03.834991   78080 cri.go:89] found id: ""
	I0729 18:29:03.835011   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.835019   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:03.835024   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:03.835073   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:03.869159   78080 cri.go:89] found id: ""
	I0729 18:29:03.869191   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.869201   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:03.869211   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:03.869226   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:03.940451   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:03.940469   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:03.940487   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:04.020880   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:04.020910   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:04.064707   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:04.064728   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:04.121551   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:04.121587   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:06.636983   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:06.651500   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:06.651582   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:06.686556   78080 cri.go:89] found id: ""
	I0729 18:29:06.686582   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.686592   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:06.686599   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:06.686660   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:06.721967   78080 cri.go:89] found id: ""
	I0729 18:29:06.721996   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.722008   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:06.722016   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:06.722115   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:06.760409   78080 cri.go:89] found id: ""
	I0729 18:29:06.760433   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.760440   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:06.760445   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:06.760499   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:06.794050   78080 cri.go:89] found id: ""
	I0729 18:29:06.794074   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.794081   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:06.794087   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:06.794143   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:06.826445   78080 cri.go:89] found id: ""
	I0729 18:29:06.826471   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.826478   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:06.826484   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:06.826544   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:06.860680   78080 cri.go:89] found id: ""
	I0729 18:29:06.860700   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.860706   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:06.860712   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:06.860761   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:06.898192   78080 cri.go:89] found id: ""
	I0729 18:29:06.898215   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.898223   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:06.898229   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:06.898284   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:06.931892   78080 cri.go:89] found id: ""
	I0729 18:29:06.931920   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.931930   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:06.931940   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:06.931955   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:06.987265   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:06.987294   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:07.043520   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:07.043547   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:07.056995   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:07.057019   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:07.124932   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:07.124956   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:07.124971   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:03.435778   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:05.936004   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:05.239352   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:07.239383   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:06.542526   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:08.543497   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:09.708947   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:09.723497   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:09.723565   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:09.762686   78080 cri.go:89] found id: ""
	I0729 18:29:09.762714   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.762725   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:09.762733   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:09.762797   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:09.799674   78080 cri.go:89] found id: ""
	I0729 18:29:09.799699   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.799708   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:09.799715   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:09.799775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:09.836121   78080 cri.go:89] found id: ""
	I0729 18:29:09.836147   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.836156   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:09.836161   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:09.836209   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:09.872758   78080 cri.go:89] found id: ""
	I0729 18:29:09.872783   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.872791   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:09.872797   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:09.872842   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:09.911681   78080 cri.go:89] found id: ""
	I0729 18:29:09.911711   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.911719   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:09.911724   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:09.911773   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:09.951531   78080 cri.go:89] found id: ""
	I0729 18:29:09.951554   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.951561   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:09.951567   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:09.951624   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:09.985568   78080 cri.go:89] found id: ""
	I0729 18:29:09.985597   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.985606   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:09.985612   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:09.985661   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:10.020369   78080 cri.go:89] found id: ""
	I0729 18:29:10.020394   78080 logs.go:276] 0 containers: []
	W0729 18:29:10.020402   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:10.020409   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:10.020421   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:10.076538   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:10.076574   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:10.090954   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:10.090980   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:10.165843   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:10.165875   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:10.165890   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:10.242438   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:10.242469   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:08.434575   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:10.934523   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:09.744446   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:12.239540   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:14.242060   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:10.544272   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:13.043064   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:12.781369   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:12.797066   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:12.797160   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:12.832500   78080 cri.go:89] found id: ""
	I0729 18:29:12.832528   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.832545   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:12.832552   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:12.832615   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:12.866390   78080 cri.go:89] found id: ""
	I0729 18:29:12.866420   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.866428   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:12.866434   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:12.866494   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:12.901616   78080 cri.go:89] found id: ""
	I0729 18:29:12.901636   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.901644   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:12.901649   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:12.901713   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:12.935954   78080 cri.go:89] found id: ""
	I0729 18:29:12.935976   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.935985   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:12.935993   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:12.936053   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:12.970570   78080 cri.go:89] found id: ""
	I0729 18:29:12.970623   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.970637   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:12.970645   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:12.970702   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:13.008629   78080 cri.go:89] found id: ""
	I0729 18:29:13.008658   78080 logs.go:276] 0 containers: []
	W0729 18:29:13.008666   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:13.008672   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:13.008725   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:13.045689   78080 cri.go:89] found id: ""
	I0729 18:29:13.045713   78080 logs.go:276] 0 containers: []
	W0729 18:29:13.045721   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:13.045726   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:13.045773   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:13.084707   78080 cri.go:89] found id: ""
	I0729 18:29:13.084735   78080 logs.go:276] 0 containers: []
	W0729 18:29:13.084745   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:13.084756   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:13.084774   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:13.161884   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:13.161920   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:13.205377   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:13.205410   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:13.258161   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:13.258189   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:13.272208   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:13.272240   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:13.347519   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:15.848068   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:15.861773   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:15.861851   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:15.902421   78080 cri.go:89] found id: ""
	I0729 18:29:15.902449   78080 logs.go:276] 0 containers: []
	W0729 18:29:15.902458   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:15.902466   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:15.902532   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:15.939552   78080 cri.go:89] found id: ""
	I0729 18:29:15.939576   78080 logs.go:276] 0 containers: []
	W0729 18:29:15.939583   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:15.939588   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:15.939645   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:15.974424   78080 cri.go:89] found id: ""
	I0729 18:29:15.974454   78080 logs.go:276] 0 containers: []
	W0729 18:29:15.974463   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:15.974468   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:15.974516   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:16.010955   78080 cri.go:89] found id: ""
	I0729 18:29:16.010993   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.011000   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:16.011006   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:16.011062   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:16.046785   78080 cri.go:89] found id: ""
	I0729 18:29:16.046815   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.046825   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:16.046832   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:16.046887   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:16.082691   78080 cri.go:89] found id: ""
	I0729 18:29:16.082721   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.082731   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:16.082739   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:16.082796   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:16.127633   78080 cri.go:89] found id: ""
	I0729 18:29:16.127663   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.127676   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:16.127684   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:16.127741   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:16.162641   78080 cri.go:89] found id: ""
	I0729 18:29:16.162662   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.162670   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:16.162684   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:16.162695   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:16.215132   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:16.215162   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:16.229581   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:16.229607   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:16.303178   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:16.303198   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:16.303212   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:16.383739   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:16.383775   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:12.934751   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:14.934965   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:16.739047   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:18.739145   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:15.043163   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:17.544340   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:18.924292   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:18.937571   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:18.937626   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:18.970523   78080 cri.go:89] found id: ""
	I0729 18:29:18.970554   78080 logs.go:276] 0 containers: []
	W0729 18:29:18.970563   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:18.970568   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:18.970624   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:19.005448   78080 cri.go:89] found id: ""
	I0729 18:29:19.005471   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.005478   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:19.005483   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:19.005538   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:19.044352   78080 cri.go:89] found id: ""
	I0729 18:29:19.044377   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.044386   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:19.044393   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:19.044448   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:19.079288   78080 cri.go:89] found id: ""
	I0729 18:29:19.079317   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.079327   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:19.079333   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:19.079402   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:19.122932   78080 cri.go:89] found id: ""
	I0729 18:29:19.122954   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.122961   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:19.122967   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:19.123020   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:19.166992   78080 cri.go:89] found id: ""
	I0729 18:29:19.167018   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.167025   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:19.167031   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:19.167103   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:19.215301   78080 cri.go:89] found id: ""
	I0729 18:29:19.215331   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.215341   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:19.215355   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:19.215419   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:19.267635   78080 cri.go:89] found id: ""
	I0729 18:29:19.267657   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.267664   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:19.267671   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:19.267682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:19.319924   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:19.319962   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:19.333987   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:19.334010   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:19.406541   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:19.406558   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:19.406571   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:19.487388   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:19.487426   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:22.027745   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:22.041145   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:22.041218   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:22.080000   78080 cri.go:89] found id: ""
	I0729 18:29:22.080022   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.080029   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:22.080034   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:22.080079   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:22.116385   78080 cri.go:89] found id: ""
	I0729 18:29:22.116415   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.116425   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:22.116431   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:22.116492   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:22.150530   78080 cri.go:89] found id: ""
	I0729 18:29:22.150552   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.150559   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:22.150565   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:22.150621   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:22.188782   78080 cri.go:89] found id: ""
	I0729 18:29:22.188808   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.188817   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:22.188822   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:22.188873   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:17.434007   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:19.434864   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:21.935573   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:20.739852   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:23.239853   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:20.044010   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:22.542952   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:24.543614   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:22.227117   78080 cri.go:89] found id: ""
	I0729 18:29:22.227152   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.227162   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:22.227169   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:22.227234   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:22.263057   78080 cri.go:89] found id: ""
	I0729 18:29:22.263079   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.263086   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:22.263091   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:22.263145   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:22.297368   78080 cri.go:89] found id: ""
	I0729 18:29:22.297391   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.297399   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:22.297406   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:22.297466   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:22.334117   78080 cri.go:89] found id: ""
	I0729 18:29:22.334149   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.334159   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:22.334170   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:22.334184   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:22.349344   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:22.349369   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:22.415720   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:22.415743   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:22.415758   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:22.494937   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:22.494971   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:22.536352   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:22.536382   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:25.087795   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:25.103985   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:25.104050   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:25.158532   78080 cri.go:89] found id: ""
	I0729 18:29:25.158562   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.158572   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:25.158580   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:25.158641   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:25.216740   78080 cri.go:89] found id: ""
	I0729 18:29:25.216762   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.216769   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:25.216775   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:25.216827   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:25.254827   78080 cri.go:89] found id: ""
	I0729 18:29:25.254855   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.254865   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:25.254872   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:25.254934   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:25.289377   78080 cri.go:89] found id: ""
	I0729 18:29:25.289407   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.289417   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:25.289424   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:25.289484   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:25.328111   78080 cri.go:89] found id: ""
	I0729 18:29:25.328144   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.328153   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:25.328161   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:25.328224   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:25.364779   78080 cri.go:89] found id: ""
	I0729 18:29:25.364808   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.364815   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:25.364827   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:25.364874   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:25.402906   78080 cri.go:89] found id: ""
	I0729 18:29:25.402935   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.402942   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:25.402948   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:25.403007   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:25.438747   78080 cri.go:89] found id: ""
	I0729 18:29:25.438770   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.438778   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:25.438787   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:25.438803   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:25.452803   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:25.452829   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:25.527575   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:25.527593   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:25.527610   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:25.622437   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:25.622482   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:25.661451   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:25.661478   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:23.936249   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:26.434496   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:25.739358   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:27.739702   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:27.043125   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:29.542130   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:28.213898   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:28.230013   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:28.230071   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:28.265484   78080 cri.go:89] found id: ""
	I0729 18:29:28.265511   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.265521   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:28.265530   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:28.265594   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:28.306374   78080 cri.go:89] found id: ""
	I0729 18:29:28.306428   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.306441   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:28.306448   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:28.306501   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:28.340274   78080 cri.go:89] found id: ""
	I0729 18:29:28.340299   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.340309   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:28.340316   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:28.340379   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:28.373928   78080 cri.go:89] found id: ""
	I0729 18:29:28.373973   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.373982   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:28.373990   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:28.374052   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:28.407075   78080 cri.go:89] found id: ""
	I0729 18:29:28.407107   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.407120   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:28.407129   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:28.407215   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:28.444501   78080 cri.go:89] found id: ""
	I0729 18:29:28.444528   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.444536   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:28.444543   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:28.444614   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:28.487513   78080 cri.go:89] found id: ""
	I0729 18:29:28.487540   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.487548   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:28.487554   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:28.487611   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:28.521957   78080 cri.go:89] found id: ""
	I0729 18:29:28.521990   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.522000   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:28.522011   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:28.522027   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:28.536880   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:28.536918   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:28.609486   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:28.609513   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:28.609528   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:28.694086   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:28.694125   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:28.733930   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:28.733964   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:31.292260   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:31.305840   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:31.305899   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:31.342510   78080 cri.go:89] found id: ""
	I0729 18:29:31.342539   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.342550   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:31.342557   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:31.342613   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:31.375093   78080 cri.go:89] found id: ""
	I0729 18:29:31.375118   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.375128   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:31.375135   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:31.375198   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:31.408554   78080 cri.go:89] found id: ""
	I0729 18:29:31.408576   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.408583   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:31.408588   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:31.408660   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:31.448748   78080 cri.go:89] found id: ""
	I0729 18:29:31.448774   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.448783   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:31.448796   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:31.448855   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:31.483541   78080 cri.go:89] found id: ""
	I0729 18:29:31.483564   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.483572   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:31.483578   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:31.483637   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:31.518173   78080 cri.go:89] found id: ""
	I0729 18:29:31.518198   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.518209   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:31.518217   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:31.518279   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:31.553345   78080 cri.go:89] found id: ""
	I0729 18:29:31.553371   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.553379   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:31.553384   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:31.553439   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:31.591857   78080 cri.go:89] found id: ""
	I0729 18:29:31.591887   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.591905   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:31.591916   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:31.591929   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:31.648404   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:31.648436   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:31.661455   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:31.661477   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:31.732978   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:31.732997   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:31.733009   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:31.812105   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:31.812145   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:28.435517   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:30.436822   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:30.239755   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:32.739231   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:34.739534   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:31.542847   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:33.543096   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:34.353079   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:34.366759   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:34.366817   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:34.400944   78080 cri.go:89] found id: ""
	I0729 18:29:34.400974   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.400984   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:34.400991   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:34.401055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:34.439348   78080 cri.go:89] found id: ""
	I0729 18:29:34.439373   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.439383   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:34.439395   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:34.439444   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:34.473969   78080 cri.go:89] found id: ""
	I0729 18:29:34.473991   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.474010   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:34.474017   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:34.474080   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:34.507741   78080 cri.go:89] found id: ""
	I0729 18:29:34.507770   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.507778   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:34.507784   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:34.507845   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:34.543794   78080 cri.go:89] found id: ""
	I0729 18:29:34.543815   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.543823   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:34.543830   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:34.543895   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:34.577893   78080 cri.go:89] found id: ""
	I0729 18:29:34.577918   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.577926   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:34.577931   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:34.577978   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:34.612703   78080 cri.go:89] found id: ""
	I0729 18:29:34.612735   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.612745   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:34.612752   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:34.612815   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:34.648167   78080 cri.go:89] found id: ""
	I0729 18:29:34.648197   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.648209   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:34.648219   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:34.648233   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:34.689821   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:34.689848   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:34.743902   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:34.743935   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:34.757400   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:34.757426   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:34.833684   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:34.833706   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:34.833721   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:32.934207   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:34.936549   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:37.238618   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:39.239761   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:36.042461   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:38.543304   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:37.419270   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:37.433249   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:37.433301   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:37.469991   78080 cri.go:89] found id: ""
	I0729 18:29:37.470021   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.470031   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:37.470038   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:37.470098   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:37.504511   78080 cri.go:89] found id: ""
	I0729 18:29:37.504537   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.504548   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:37.504554   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:37.504612   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:37.545304   78080 cri.go:89] found id: ""
	I0729 18:29:37.545332   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.545342   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:37.545349   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:37.545406   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:37.584255   78080 cri.go:89] found id: ""
	I0729 18:29:37.584280   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.584287   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:37.584292   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:37.584345   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:37.620917   78080 cri.go:89] found id: ""
	I0729 18:29:37.620943   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.620951   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:37.620958   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:37.621022   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:37.659381   78080 cri.go:89] found id: ""
	I0729 18:29:37.659405   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.659414   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:37.659419   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:37.659486   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:37.701337   78080 cri.go:89] found id: ""
	I0729 18:29:37.701360   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.701368   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:37.701373   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:37.701426   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:37.737142   78080 cri.go:89] found id: ""
	I0729 18:29:37.737168   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.737177   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:37.737186   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:37.737201   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:37.789951   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:37.789992   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:37.804759   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:37.804784   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:37.881777   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:37.881794   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:37.881808   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:37.970593   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:37.970625   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:40.511557   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:40.525472   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:40.525527   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:40.564227   78080 cri.go:89] found id: ""
	I0729 18:29:40.564253   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.564263   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:40.564270   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:40.564336   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:40.600384   78080 cri.go:89] found id: ""
	I0729 18:29:40.600409   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.600417   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:40.600423   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:40.600475   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:40.634819   78080 cri.go:89] found id: ""
	I0729 18:29:40.634843   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.634858   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:40.634866   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:40.634913   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:40.669963   78080 cri.go:89] found id: ""
	I0729 18:29:40.669991   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.669999   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:40.670006   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:40.670069   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:40.705680   78080 cri.go:89] found id: ""
	I0729 18:29:40.705705   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.705714   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:40.705719   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:40.705775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:40.743691   78080 cri.go:89] found id: ""
	I0729 18:29:40.743715   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.743725   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:40.743732   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:40.743820   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:40.783858   78080 cri.go:89] found id: ""
	I0729 18:29:40.783889   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.783898   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:40.783903   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:40.783953   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:40.821499   78080 cri.go:89] found id: ""
	I0729 18:29:40.821527   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.821537   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:40.821547   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:40.821562   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:40.874941   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:40.874972   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:40.888034   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:40.888057   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:40.960013   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:40.960032   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:40.960044   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:41.043013   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:41.043042   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:37.435119   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:39.435967   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:41.934232   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:41.739070   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:43.739497   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:40.543453   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:43.042528   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:43.583555   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:43.597120   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:43.597193   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:43.631500   78080 cri.go:89] found id: ""
	I0729 18:29:43.631526   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.631535   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:43.631542   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:43.631607   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:43.667003   78080 cri.go:89] found id: ""
	I0729 18:29:43.667029   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.667037   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:43.667042   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:43.667102   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:43.701471   78080 cri.go:89] found id: ""
	I0729 18:29:43.701502   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.701510   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:43.701515   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:43.701569   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:43.740037   78080 cri.go:89] found id: ""
	I0729 18:29:43.740058   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.740067   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:43.740074   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:43.740145   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:43.772584   78080 cri.go:89] found id: ""
	I0729 18:29:43.772610   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.772620   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:43.772626   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:43.772689   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:43.806340   78080 cri.go:89] found id: ""
	I0729 18:29:43.806382   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.806393   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:43.806401   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:43.806480   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:43.840085   78080 cri.go:89] found id: ""
	I0729 18:29:43.840109   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.840118   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:43.840133   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:43.840198   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:43.873412   78080 cri.go:89] found id: ""
	I0729 18:29:43.873438   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.873448   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:43.873458   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:43.873473   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:43.928762   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:43.928790   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:43.944129   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:43.944156   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:44.017330   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:44.017349   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:44.017361   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:44.106858   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:44.106915   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:46.651050   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:46.665253   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:46.665310   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:46.698846   78080 cri.go:89] found id: ""
	I0729 18:29:46.698871   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.698881   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:46.698888   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:46.698956   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:46.734354   78080 cri.go:89] found id: ""
	I0729 18:29:46.734395   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.734405   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:46.734413   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:46.734468   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:46.771978   78080 cri.go:89] found id: ""
	I0729 18:29:46.771999   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.772007   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:46.772012   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:46.772059   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:46.807231   78080 cri.go:89] found id: ""
	I0729 18:29:46.807255   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.807263   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:46.807272   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:46.807329   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:46.842257   78080 cri.go:89] found id: ""
	I0729 18:29:46.842278   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.842306   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:46.842312   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:46.842373   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:46.876287   78080 cri.go:89] found id: ""
	I0729 18:29:46.876309   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.876317   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:46.876323   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:46.876389   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:46.909695   78080 cri.go:89] found id: ""
	I0729 18:29:46.909719   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.909726   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:46.909731   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:46.909806   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:46.951768   78080 cri.go:89] found id: ""
	I0729 18:29:46.951798   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.951807   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:46.951815   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:46.951825   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:47.025467   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:47.025485   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:47.025497   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:47.106336   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:47.106391   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:47.145652   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:47.145682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:47.200857   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:47.200886   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:43.935210   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:46.434346   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:45.739606   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:48.240282   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:45.544442   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:48.042872   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:49.715401   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:49.729703   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:49.729776   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:49.770016   78080 cri.go:89] found id: ""
	I0729 18:29:49.770039   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.770062   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:49.770070   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:49.770127   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:49.805464   78080 cri.go:89] found id: ""
	I0729 18:29:49.805487   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.805495   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:49.805500   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:49.805560   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:49.838739   78080 cri.go:89] found id: ""
	I0729 18:29:49.838770   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.838782   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:49.838789   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:49.838861   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:49.881168   78080 cri.go:89] found id: ""
	I0729 18:29:49.881194   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.881202   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:49.881208   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:49.881269   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:49.919978   78080 cri.go:89] found id: ""
	I0729 18:29:49.919999   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.920006   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:49.920012   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:49.920079   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:49.958971   78080 cri.go:89] found id: ""
	I0729 18:29:49.958996   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.959006   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:49.959013   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:49.959063   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:50.001253   78080 cri.go:89] found id: ""
	I0729 18:29:50.001281   78080 logs.go:276] 0 containers: []
	W0729 18:29:50.001291   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:50.001298   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:50.001362   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:50.038729   78080 cri.go:89] found id: ""
	I0729 18:29:50.038755   78080 logs.go:276] 0 containers: []
	W0729 18:29:50.038766   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:50.038776   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:50.038789   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:50.082540   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:50.082567   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:50.132372   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:50.132413   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:50.146806   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:50.146835   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:50.214495   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:50.214515   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:50.214532   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:48.435540   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:50.935475   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:50.240626   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:52.739158   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:50.044073   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:52.047924   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:54.542657   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:52.793987   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:52.808085   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:52.808149   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:52.844869   78080 cri.go:89] found id: ""
	I0729 18:29:52.844904   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.844917   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:52.844925   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:52.844986   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:52.878097   78080 cri.go:89] found id: ""
	I0729 18:29:52.878122   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.878135   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:52.878142   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:52.878191   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:52.910843   78080 cri.go:89] found id: ""
	I0729 18:29:52.910884   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.910894   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:52.910902   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:52.910953   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:52.943233   78080 cri.go:89] found id: ""
	I0729 18:29:52.943257   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.943267   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:52.943274   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:52.943335   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:52.978354   78080 cri.go:89] found id: ""
	I0729 18:29:52.978402   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.978413   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:52.978423   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:52.978503   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:53.011238   78080 cri.go:89] found id: ""
	I0729 18:29:53.011266   78080 logs.go:276] 0 containers: []
	W0729 18:29:53.011276   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:53.011283   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:53.011336   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:53.048787   78080 cri.go:89] found id: ""
	I0729 18:29:53.048817   78080 logs.go:276] 0 containers: []
	W0729 18:29:53.048827   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:53.048834   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:53.048900   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:53.086108   78080 cri.go:89] found id: ""
	I0729 18:29:53.086135   78080 logs.go:276] 0 containers: []
	W0729 18:29:53.086156   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:53.086176   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:53.086195   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:53.137552   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:53.137580   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:53.151308   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:53.151333   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:53.225968   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:53.225992   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:53.226004   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:53.308111   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:53.308145   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:55.850207   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:55.864003   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:55.864054   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:55.898109   78080 cri.go:89] found id: ""
	I0729 18:29:55.898134   78080 logs.go:276] 0 containers: []
	W0729 18:29:55.898142   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:55.898148   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:55.898201   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:55.931616   78080 cri.go:89] found id: ""
	I0729 18:29:55.931643   78080 logs.go:276] 0 containers: []
	W0729 18:29:55.931653   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:55.931660   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:55.931719   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:55.969034   78080 cri.go:89] found id: ""
	I0729 18:29:55.969063   78080 logs.go:276] 0 containers: []
	W0729 18:29:55.969073   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:55.969080   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:55.969142   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:56.007552   78080 cri.go:89] found id: ""
	I0729 18:29:56.007576   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.007586   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:56.007592   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:56.007653   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:56.044342   78080 cri.go:89] found id: ""
	I0729 18:29:56.044367   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.044376   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:56.044382   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:56.044437   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:56.078352   78080 cri.go:89] found id: ""
	I0729 18:29:56.078396   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.078412   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:56.078420   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:56.078471   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:56.116505   78080 cri.go:89] found id: ""
	I0729 18:29:56.116532   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.116543   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:56.116551   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:56.116611   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:56.151493   78080 cri.go:89] found id: ""
	I0729 18:29:56.151516   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.151523   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:56.151530   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:56.151542   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:56.206170   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:56.206198   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:56.219658   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:56.219684   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:56.290279   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:56.290300   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:56.290312   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:56.371352   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:56.371382   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:53.434046   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:55.435343   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:55.239055   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:57.241032   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:59.740003   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:57.041745   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:59.042416   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:58.908793   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:58.922566   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:58.922626   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:58.959375   78080 cri.go:89] found id: ""
	I0729 18:29:58.959397   78080 logs.go:276] 0 containers: []
	W0729 18:29:58.959404   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:58.959410   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:58.959459   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:58.993235   78080 cri.go:89] found id: ""
	I0729 18:29:58.993257   78080 logs.go:276] 0 containers: []
	W0729 18:29:58.993265   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:58.993271   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:58.993331   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:59.028186   78080 cri.go:89] found id: ""
	I0729 18:29:59.028212   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.028220   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:59.028225   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:59.028271   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:59.063589   78080 cri.go:89] found id: ""
	I0729 18:29:59.063619   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.063628   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:59.063635   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:59.063695   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:59.101116   78080 cri.go:89] found id: ""
	I0729 18:29:59.101142   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.101152   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:59.101158   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:59.101208   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:59.135288   78080 cri.go:89] found id: ""
	I0729 18:29:59.135314   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.135324   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:59.135332   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:59.135395   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:59.170520   78080 cri.go:89] found id: ""
	I0729 18:29:59.170549   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.170557   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:59.170562   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:59.170618   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:59.229796   78080 cri.go:89] found id: ""
	I0729 18:29:59.229825   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.229835   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:59.229843   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:59.229871   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:59.244654   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:59.244682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:59.321262   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:59.321286   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:59.321301   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:59.401423   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:59.401459   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:59.442916   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:59.442938   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:01.995116   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:02.008454   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:02.008516   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:02.046412   78080 cri.go:89] found id: ""
	I0729 18:30:02.046431   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.046438   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:02.046443   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:02.046487   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:02.082444   78080 cri.go:89] found id: ""
	I0729 18:30:02.082466   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.082476   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:02.082482   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:02.082551   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:02.116013   78080 cri.go:89] found id: ""
	I0729 18:30:02.116041   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.116052   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:02.116058   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:02.116127   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:02.155817   78080 cri.go:89] found id: ""
	I0729 18:30:02.155844   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.155854   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:02.155862   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:02.155914   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:02.195518   78080 cri.go:89] found id: ""
	I0729 18:30:02.195548   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.195556   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:02.195563   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:02.195624   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:57.934058   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:59.934547   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:01.935238   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:01.742050   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:04.239758   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:01.043550   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:03.542544   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:02.228248   78080 cri.go:89] found id: ""
	I0729 18:30:02.228274   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.228283   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:02.228289   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:02.228370   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:02.262441   78080 cri.go:89] found id: ""
	I0729 18:30:02.262469   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.262479   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:02.262486   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:02.262546   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:02.296900   78080 cri.go:89] found id: ""
	I0729 18:30:02.296930   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.296937   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:02.296953   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:02.296965   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:02.352356   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:02.352389   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:02.366336   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:02.366365   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:02.441367   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:02.441389   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:02.441403   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:02.524134   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:02.524173   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:05.071581   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:05.085481   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:05.085535   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:05.121610   78080 cri.go:89] found id: ""
	I0729 18:30:05.121636   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.121644   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:05.121652   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:05.121716   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:05.157382   78080 cri.go:89] found id: ""
	I0729 18:30:05.157406   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.157413   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:05.157418   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:05.157478   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:05.195552   78080 cri.go:89] found id: ""
	I0729 18:30:05.195582   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.195593   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:05.195600   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:05.195657   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:05.231071   78080 cri.go:89] found id: ""
	I0729 18:30:05.231095   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.231103   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:05.231108   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:05.231165   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:05.267445   78080 cri.go:89] found id: ""
	I0729 18:30:05.267474   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.267485   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:05.267493   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:05.267555   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:05.304258   78080 cri.go:89] found id: ""
	I0729 18:30:05.304279   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.304286   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:05.304291   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:05.304338   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:05.339155   78080 cri.go:89] found id: ""
	I0729 18:30:05.339176   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.339184   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:05.339190   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:05.339243   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:05.375291   78080 cri.go:89] found id: ""
	I0729 18:30:05.375328   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.375337   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:05.375346   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:05.375361   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:05.446196   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:05.446221   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:05.446236   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:05.529421   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:05.529457   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:05.570234   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:05.570269   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:05.629349   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:05.629391   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:04.434625   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:06.934246   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:06.239886   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:08.242421   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:05.543394   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:08.042242   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:08.151320   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:08.165983   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:08.166045   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:08.205703   78080 cri.go:89] found id: ""
	I0729 18:30:08.205726   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.205733   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:08.205738   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:08.205786   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:08.245919   78080 cri.go:89] found id: ""
	I0729 18:30:08.245946   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.245957   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:08.245964   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:08.246024   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:08.286595   78080 cri.go:89] found id: ""
	I0729 18:30:08.286621   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.286631   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:08.286638   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:08.286700   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:08.330032   78080 cri.go:89] found id: ""
	I0729 18:30:08.330060   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.330070   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:08.330077   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:08.330140   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:08.362535   78080 cri.go:89] found id: ""
	I0729 18:30:08.362567   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.362578   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:08.362586   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:08.362645   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:08.397648   78080 cri.go:89] found id: ""
	I0729 18:30:08.397678   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.397688   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:08.397704   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:08.397766   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:08.433615   78080 cri.go:89] found id: ""
	I0729 18:30:08.433693   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.433716   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:08.433734   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:08.433809   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:08.465765   78080 cri.go:89] found id: ""
	I0729 18:30:08.465792   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.465803   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:08.465814   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:08.465829   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:08.536332   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:08.536360   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:08.536375   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:08.613737   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:08.613776   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:08.659707   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:08.659736   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:08.712702   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:08.712736   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:11.226660   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:11.240852   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:11.240919   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:11.277632   78080 cri.go:89] found id: ""
	I0729 18:30:11.277664   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.277675   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:11.277682   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:11.277751   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:11.312458   78080 cri.go:89] found id: ""
	I0729 18:30:11.312478   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.312485   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:11.312491   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:11.312551   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:11.350375   78080 cri.go:89] found id: ""
	I0729 18:30:11.350406   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.350416   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:11.350424   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:11.350486   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:11.389280   78080 cri.go:89] found id: ""
	I0729 18:30:11.389307   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.389317   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:11.389324   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:11.389382   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:11.424907   78080 cri.go:89] found id: ""
	I0729 18:30:11.424936   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.424944   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:11.424949   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:11.425009   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:11.480686   78080 cri.go:89] found id: ""
	I0729 18:30:11.480713   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.480720   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:11.480726   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:11.480778   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:11.514831   78080 cri.go:89] found id: ""
	I0729 18:30:11.514857   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.514864   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:11.514870   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:11.514917   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:11.547930   78080 cri.go:89] found id: ""
	I0729 18:30:11.547955   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.547964   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:11.547974   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:11.547989   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:11.586068   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:11.586098   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:11.646857   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:11.646892   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:11.663549   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:11.663576   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:11.731362   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:11.731383   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:11.731397   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:08.934638   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:11.434765   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:10.738608   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:12.740637   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:10.042514   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:12.042731   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:14.042952   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:14.315531   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:14.330485   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:14.330544   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:14.363403   78080 cri.go:89] found id: ""
	I0729 18:30:14.363433   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.363444   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:14.363451   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:14.363516   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:14.401204   78080 cri.go:89] found id: ""
	I0729 18:30:14.401227   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.401234   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:14.401240   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:14.401301   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:14.436737   78080 cri.go:89] found id: ""
	I0729 18:30:14.436765   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.436775   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:14.436782   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:14.436844   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:14.471376   78080 cri.go:89] found id: ""
	I0729 18:30:14.471403   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.471411   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:14.471419   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:14.471478   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:14.506883   78080 cri.go:89] found id: ""
	I0729 18:30:14.506914   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.506925   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:14.506932   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:14.506990   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:14.546444   78080 cri.go:89] found id: ""
	I0729 18:30:14.546469   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.546479   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:14.546486   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:14.546552   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:14.580282   78080 cri.go:89] found id: ""
	I0729 18:30:14.580313   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.580320   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:14.580326   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:14.580387   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:14.614185   78080 cri.go:89] found id: ""
	I0729 18:30:14.614210   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.614220   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:14.614231   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:14.614246   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:14.652588   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:14.652610   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:14.706056   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:14.706090   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:14.719332   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:14.719356   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:14.792087   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:14.792115   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:14.792136   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:13.934967   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:16.435238   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:14.740676   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:17.239466   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:19.239656   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:16.541564   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:18.547053   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:17.375639   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:17.389473   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:17.389535   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:17.424485   78080 cri.go:89] found id: ""
	I0729 18:30:17.424513   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.424521   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:17.424527   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:17.424572   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:17.461100   78080 cri.go:89] found id: ""
	I0729 18:30:17.461129   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.461136   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:17.461141   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:17.461191   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:17.494866   78080 cri.go:89] found id: ""
	I0729 18:30:17.494894   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.494902   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:17.494907   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:17.494983   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:17.529897   78080 cri.go:89] found id: ""
	I0729 18:30:17.529924   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.529934   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:17.529940   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:17.530002   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:17.569870   78080 cri.go:89] found id: ""
	I0729 18:30:17.569897   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.569905   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:17.569910   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:17.569958   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:17.605324   78080 cri.go:89] found id: ""
	I0729 18:30:17.605364   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.605384   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:17.605392   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:17.605457   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:17.640552   78080 cri.go:89] found id: ""
	I0729 18:30:17.640583   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.640595   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:17.640602   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:17.640668   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:17.679769   78080 cri.go:89] found id: ""
	I0729 18:30:17.679800   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.679808   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:17.679827   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:17.679843   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:17.757782   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:17.757814   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:17.803850   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:17.803878   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:17.857987   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:17.858017   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:17.871062   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:17.871086   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:17.940456   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
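	Every "describe nodes" attempt in this section fails the same way: the connection to localhost:8443 is refused, consistent with the empty crictl listings (the apiserver container never comes up). A quick manual confirmation from inside the guest, assuming the port 8443 shown in the error text and the standard kube-apiserver health endpoint, might be:

	```bash
	# Sketch: confirm nothing is serving the apiserver port inside the guest.
	# Port 8443 comes from the error above; /healthz is the standard apiserver
	# health endpoint (an assumption that it applies to this v1.20.0 cluster).
	sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"

	# If something were listening, the health endpoint would answer here:
	curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"
	```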
	I0729 18:30:20.441171   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:20.454752   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:20.454824   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:20.490744   78080 cri.go:89] found id: ""
	I0729 18:30:20.490773   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.490783   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:20.490791   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:20.490853   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:20.524406   78080 cri.go:89] found id: ""
	I0729 18:30:20.524437   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.524448   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:20.524463   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:20.524515   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:20.559225   78080 cri.go:89] found id: ""
	I0729 18:30:20.559257   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.559268   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:20.559275   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:20.559337   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:20.595297   78080 cri.go:89] found id: ""
	I0729 18:30:20.595324   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.595355   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:20.595364   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:20.595436   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:20.632176   78080 cri.go:89] found id: ""
	I0729 18:30:20.632204   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.632215   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:20.632222   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:20.632282   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:20.676600   78080 cri.go:89] found id: ""
	I0729 18:30:20.676625   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.676632   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:20.676638   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:20.676734   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:20.717920   78080 cri.go:89] found id: ""
	I0729 18:30:20.717945   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.717955   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:20.717966   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:20.718021   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:20.756217   78080 cri.go:89] found id: ""
	I0729 18:30:20.756243   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.756253   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:20.756262   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:20.756277   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:20.837150   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:20.837189   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:20.876023   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:20.876050   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:20.932402   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:20.932429   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:20.947422   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:20.947454   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:21.022698   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:18.934790   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:21.434992   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:21.242999   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:23.739073   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:21.042689   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:23.042794   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:23.523141   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:23.538019   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:23.538098   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:23.576953   78080 cri.go:89] found id: ""
	I0729 18:30:23.576979   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.576991   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:23.576998   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:23.577060   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:23.613052   78080 cri.go:89] found id: ""
	I0729 18:30:23.613083   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.613094   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:23.613100   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:23.613170   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:23.648694   78080 cri.go:89] found id: ""
	I0729 18:30:23.648717   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.648725   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:23.648730   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:23.648775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:23.680939   78080 cri.go:89] found id: ""
	I0729 18:30:23.680965   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.680972   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:23.680977   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:23.681032   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:23.716529   78080 cri.go:89] found id: ""
	I0729 18:30:23.716556   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.716564   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:23.716569   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:23.716628   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:23.756833   78080 cri.go:89] found id: ""
	I0729 18:30:23.756860   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.756868   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:23.756873   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:23.756918   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:23.796436   78080 cri.go:89] found id: ""
	I0729 18:30:23.796460   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.796467   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:23.796472   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:23.796519   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:23.839877   78080 cri.go:89] found id: ""
	I0729 18:30:23.839906   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.839914   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:23.839922   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:23.839934   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:23.879423   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:23.879447   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:23.928379   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:23.928408   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:23.942639   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:23.942669   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:24.014068   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:24.014095   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:24.014110   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:26.597923   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:26.610877   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:26.610945   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:26.647550   78080 cri.go:89] found id: ""
	I0729 18:30:26.647579   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.647590   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:26.647598   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:26.647655   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:26.681552   78080 cri.go:89] found id: ""
	I0729 18:30:26.681581   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.681589   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:26.681595   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:26.681660   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:26.714475   78080 cri.go:89] found id: ""
	I0729 18:30:26.714503   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.714513   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:26.714519   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:26.714588   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:26.748671   78080 cri.go:89] found id: ""
	I0729 18:30:26.748697   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.748707   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:26.748714   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:26.748775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:26.781380   78080 cri.go:89] found id: ""
	I0729 18:30:26.781406   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.781421   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:26.781429   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:26.781483   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:26.815201   78080 cri.go:89] found id: ""
	I0729 18:30:26.815230   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.815243   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:26.815251   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:26.815318   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:26.848600   78080 cri.go:89] found id: ""
	I0729 18:30:26.848628   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.848637   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:26.848644   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:26.848724   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:26.883828   78080 cri.go:89] found id: ""
	I0729 18:30:26.883872   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.883883   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:26.883893   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:26.883908   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:26.936955   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:26.936987   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:26.952212   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:26.952238   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:27.019389   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:27.019413   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:27.019426   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:27.095654   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:27.095682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:23.935397   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:26.435231   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:26.238749   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:28.239699   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:25.044320   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:27.542022   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:29.542274   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:29.637269   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:29.652138   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:29.652211   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:29.691063   78080 cri.go:89] found id: ""
	I0729 18:30:29.691094   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.691104   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:29.691111   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:29.691173   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:29.725188   78080 cri.go:89] found id: ""
	I0729 18:30:29.725224   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.725232   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:29.725240   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:29.725308   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:29.764118   78080 cri.go:89] found id: ""
	I0729 18:30:29.764149   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.764159   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:29.764167   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:29.764232   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:29.797884   78080 cri.go:89] found id: ""
	I0729 18:30:29.797909   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.797919   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:29.797927   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:29.797989   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:29.838784   78080 cri.go:89] found id: ""
	I0729 18:30:29.838808   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.838815   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:29.838821   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:29.838885   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:29.872394   78080 cri.go:89] found id: ""
	I0729 18:30:29.872420   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.872427   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:29.872433   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:29.872491   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:29.908966   78080 cri.go:89] found id: ""
	I0729 18:30:29.908995   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.909012   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:29.909020   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:29.909081   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:29.946322   78080 cri.go:89] found id: ""
	I0729 18:30:29.946344   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.946352   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:29.946371   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:29.946386   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:30.019133   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:30.019166   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:30.019179   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:30.096499   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:30.096532   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:30.136487   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:30.136519   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:30.187341   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:30.187374   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:28.435472   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:30.934817   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:30.739101   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:32.742029   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:32.042850   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:34.042919   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:32.703546   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:32.716981   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:32.717042   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:32.753275   78080 cri.go:89] found id: ""
	I0729 18:30:32.753307   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.753318   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:32.753326   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:32.753393   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:32.789075   78080 cri.go:89] found id: ""
	I0729 18:30:32.789105   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.789116   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:32.789123   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:32.789185   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:32.822945   78080 cri.go:89] found id: ""
	I0729 18:30:32.822971   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.822979   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:32.822984   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:32.823033   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:32.856523   78080 cri.go:89] found id: ""
	I0729 18:30:32.856577   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.856589   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:32.856597   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:32.856661   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:32.895768   78080 cri.go:89] found id: ""
	I0729 18:30:32.895798   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.895810   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:32.895817   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:32.895876   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:32.934990   78080 cri.go:89] found id: ""
	I0729 18:30:32.935030   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.935042   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:32.935054   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:32.935132   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:32.970924   78080 cri.go:89] found id: ""
	I0729 18:30:32.970949   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.970957   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:32.970964   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:32.971022   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:33.004133   78080 cri.go:89] found id: ""
	I0729 18:30:33.004164   78080 logs.go:276] 0 containers: []
	W0729 18:30:33.004173   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:33.004182   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:33.004202   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:33.043432   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:33.043467   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:33.095517   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:33.095554   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:33.108859   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:33.108889   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:33.180661   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:33.180681   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:33.180696   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:35.763324   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:35.777060   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:35.777138   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:35.812601   78080 cri.go:89] found id: ""
	I0729 18:30:35.812636   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.812647   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:35.812654   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:35.812719   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:35.848116   78080 cri.go:89] found id: ""
	I0729 18:30:35.848161   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.848172   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:35.848179   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:35.848240   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:35.895786   78080 cri.go:89] found id: ""
	I0729 18:30:35.895817   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.895829   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:35.895837   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:35.895911   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:35.936753   78080 cri.go:89] found id: ""
	I0729 18:30:35.936780   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.936787   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:35.936794   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:35.936848   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:35.971321   78080 cri.go:89] found id: ""
	I0729 18:30:35.971349   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.971358   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:35.971371   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:35.971434   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:36.018702   78080 cri.go:89] found id: ""
	I0729 18:30:36.018725   78080 logs.go:276] 0 containers: []
	W0729 18:30:36.018732   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:36.018737   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:36.018792   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:36.054829   78080 cri.go:89] found id: ""
	I0729 18:30:36.054865   78080 logs.go:276] 0 containers: []
	W0729 18:30:36.054875   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:36.054882   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:36.054948   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:36.087456   78080 cri.go:89] found id: ""
	I0729 18:30:36.087483   78080 logs.go:276] 0 containers: []
	W0729 18:30:36.087492   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:36.087500   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:36.087512   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:36.140919   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:36.140951   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:36.155581   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:36.155614   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:36.227617   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:36.227642   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:36.227669   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:36.304610   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:36.304651   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:32.935270   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:34.935362   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:35.239258   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:37.242161   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:39.739031   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:36.043489   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:38.542041   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:38.843099   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:38.857571   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:38.857626   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:38.890760   78080 cri.go:89] found id: ""
	I0729 18:30:38.890790   78080 logs.go:276] 0 containers: []
	W0729 18:30:38.890801   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:38.890809   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:38.890884   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:38.932701   78080 cri.go:89] found id: ""
	I0729 18:30:38.932738   78080 logs.go:276] 0 containers: []
	W0729 18:30:38.932748   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:38.932755   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:38.932812   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:38.967379   78080 cri.go:89] found id: ""
	I0729 18:30:38.967406   78080 logs.go:276] 0 containers: []
	W0729 18:30:38.967416   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:38.967430   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:38.967490   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:39.000419   78080 cri.go:89] found id: ""
	I0729 18:30:39.000450   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.000459   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:39.000466   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:39.000528   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:39.033764   78080 cri.go:89] found id: ""
	I0729 18:30:39.033793   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.033802   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:39.033807   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:39.033857   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:39.070904   78080 cri.go:89] found id: ""
	I0729 18:30:39.070933   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.070944   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:39.070951   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:39.071010   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:39.107444   78080 cri.go:89] found id: ""
	I0729 18:30:39.107471   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.107480   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:39.107488   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:39.107549   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:39.141392   78080 cri.go:89] found id: ""
	I0729 18:30:39.141423   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.141436   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:39.141449   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:39.141464   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:39.154874   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:39.154905   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:39.229370   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:39.229396   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:39.229413   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:39.310508   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:39.310538   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:39.352547   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:39.352569   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:41.908463   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:41.922132   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:41.922209   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:41.960404   78080 cri.go:89] found id: ""
	I0729 18:30:41.960431   78080 logs.go:276] 0 containers: []
	W0729 18:30:41.960439   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:41.960444   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:41.960498   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:41.994082   78080 cri.go:89] found id: ""
	I0729 18:30:41.994110   78080 logs.go:276] 0 containers: []
	W0729 18:30:41.994117   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:41.994123   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:41.994177   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:42.030301   78080 cri.go:89] found id: ""
	I0729 18:30:42.030322   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.030330   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:42.030336   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:42.030401   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:42.064310   78080 cri.go:89] found id: ""
	I0729 18:30:42.064339   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.064349   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:42.064356   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:42.064413   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:42.097705   78080 cri.go:89] found id: ""
	I0729 18:30:42.097738   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.097748   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:42.097761   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:42.097819   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:42.133254   78080 cri.go:89] found id: ""
	I0729 18:30:42.133282   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.133292   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:42.133299   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:42.133361   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:42.170028   78080 cri.go:89] found id: ""
	I0729 18:30:42.170054   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.170063   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:42.170075   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:42.170141   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:42.205680   78080 cri.go:89] found id: ""
	I0729 18:30:42.205712   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.205723   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:42.205736   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:42.205749   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:37.442211   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:39.934866   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:41.935293   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:42.240035   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:41.041897   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:43.042300   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:42.246322   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:42.246350   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:42.300852   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:42.300884   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:42.316306   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:42.316333   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:42.389898   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:42.389920   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:42.389934   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:44.971238   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:44.984796   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:44.984846   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:45.021842   78080 cri.go:89] found id: ""
	I0729 18:30:45.021868   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.021877   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:45.021885   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:45.021958   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:45.059353   78080 cri.go:89] found id: ""
	I0729 18:30:45.059377   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.059387   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:45.059394   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:45.059456   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:45.094867   78080 cri.go:89] found id: ""
	I0729 18:30:45.094900   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.094911   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:45.094918   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:45.094974   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:45.128589   78080 cri.go:89] found id: ""
	I0729 18:30:45.128614   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.128622   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:45.128628   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:45.128671   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:45.160137   78080 cri.go:89] found id: ""
	I0729 18:30:45.160165   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.160172   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:45.160177   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:45.160228   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:45.205757   78080 cri.go:89] found id: ""
	I0729 18:30:45.205780   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.205787   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:45.205793   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:45.205840   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:45.250056   78080 cri.go:89] found id: ""
	I0729 18:30:45.250084   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.250091   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:45.250096   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:45.250179   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:45.285349   78080 cri.go:89] found id: ""
	I0729 18:30:45.285372   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.285380   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:45.285389   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:45.285401   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:45.364188   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:45.364218   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:45.412638   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:45.412660   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:45.467713   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:45.467745   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:45.483811   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:45.483835   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:45.564866   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:44.434921   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:46.934237   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:44.740648   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:47.239253   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:49.240229   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:45.043415   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:47.542757   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:49.543251   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:48.065579   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:48.079441   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:48.079511   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:48.115540   78080 cri.go:89] found id: ""
	I0729 18:30:48.115569   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.115578   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:48.115586   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:48.115670   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:48.151810   78080 cri.go:89] found id: ""
	I0729 18:30:48.151834   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.151841   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:48.151847   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:48.151913   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:48.187459   78080 cri.go:89] found id: ""
	I0729 18:30:48.187490   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.187500   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:48.187508   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:48.187568   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:48.226804   78080 cri.go:89] found id: ""
	I0729 18:30:48.226835   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.226846   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:48.226853   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:48.226916   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:48.260413   78080 cri.go:89] found id: ""
	I0729 18:30:48.260439   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.260448   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:48.260455   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:48.260517   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:48.296719   78080 cri.go:89] found id: ""
	I0729 18:30:48.296743   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.296751   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:48.296756   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:48.296806   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:48.331969   78080 cri.go:89] found id: ""
	I0729 18:30:48.331995   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.332002   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:48.332008   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:48.332055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:48.370593   78080 cri.go:89] found id: ""
	I0729 18:30:48.370618   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.370626   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:48.370634   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:48.370645   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:48.410653   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:48.410679   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:48.465467   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:48.465503   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:48.480025   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:48.480053   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:48.557806   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:48.557824   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:48.557840   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:51.140743   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:51.153970   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:51.154046   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:51.187826   78080 cri.go:89] found id: ""
	I0729 18:30:51.187851   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.187862   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:51.187868   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:51.187922   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:51.226140   78080 cri.go:89] found id: ""
	I0729 18:30:51.226172   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.226182   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:51.226189   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:51.226255   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:51.262321   78080 cri.go:89] found id: ""
	I0729 18:30:51.262349   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.262357   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:51.262378   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:51.262440   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:51.295356   78080 cri.go:89] found id: ""
	I0729 18:30:51.295383   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.295395   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:51.295403   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:51.295467   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:51.328320   78080 cri.go:89] found id: ""
	I0729 18:30:51.328349   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.328361   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:51.328367   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:51.328424   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:51.364202   78080 cri.go:89] found id: ""
	I0729 18:30:51.364233   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.364242   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:51.364249   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:51.364313   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:51.405500   78080 cri.go:89] found id: ""
	I0729 18:30:51.405529   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.405538   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:51.405544   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:51.405606   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:51.443519   78080 cri.go:89] found id: ""
	I0729 18:30:51.443541   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.443548   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:51.443556   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:51.443567   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:51.495560   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:51.495599   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:51.512152   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:51.512178   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:51.590972   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:51.590992   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:51.591021   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:51.688717   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:51.688757   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:48.934577   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:51.437173   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:51.739680   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:54.238626   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:52.044254   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:54.545288   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:54.256011   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:54.270602   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:54.270653   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:54.311547   78080 cri.go:89] found id: ""
	I0729 18:30:54.311574   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.311584   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:54.311592   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:54.311655   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:54.347559   78080 cri.go:89] found id: ""
	I0729 18:30:54.347591   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.347602   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:54.347610   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:54.347675   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:54.382180   78080 cri.go:89] found id: ""
	I0729 18:30:54.382205   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.382212   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:54.382217   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:54.382264   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:54.415560   78080 cri.go:89] found id: ""
	I0729 18:30:54.415587   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.415594   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:54.415600   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:54.415655   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:54.450313   78080 cri.go:89] found id: ""
	I0729 18:30:54.450341   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.450351   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:54.450372   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:54.450439   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:54.484649   78080 cri.go:89] found id: ""
	I0729 18:30:54.484678   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.484687   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:54.484694   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:54.484741   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:54.520170   78080 cri.go:89] found id: ""
	I0729 18:30:54.520204   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.520212   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:54.520220   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:54.520270   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:54.562724   78080 cri.go:89] found id: ""
	I0729 18:30:54.562753   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.562762   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:54.562772   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:54.562788   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:54.617461   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:54.617498   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:54.630970   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:54.630993   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:54.699332   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:54.699353   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:54.699366   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:54.779240   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:54.779276   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:53.934151   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:56.434549   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:56.239554   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:58.239583   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:57.041845   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:59.042164   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:57.318673   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:57.332789   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:57.332845   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:57.370434   78080 cri.go:89] found id: ""
	I0729 18:30:57.370461   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.370486   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:57.370492   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:57.370547   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:57.420694   78080 cri.go:89] found id: ""
	I0729 18:30:57.420724   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.420735   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:57.420742   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:57.420808   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:57.469245   78080 cri.go:89] found id: ""
	I0729 18:30:57.469271   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.469282   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:57.469288   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:57.469355   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:57.524937   78080 cri.go:89] found id: ""
	I0729 18:30:57.524963   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.524970   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:57.524976   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:57.525031   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:57.566803   78080 cri.go:89] found id: ""
	I0729 18:30:57.566830   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.566840   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:57.566847   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:57.566910   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:57.602786   78080 cri.go:89] found id: ""
	I0729 18:30:57.602814   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.602821   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:57.602826   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:57.602891   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:57.639319   78080 cri.go:89] found id: ""
	I0729 18:30:57.639347   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.639355   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:57.639361   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:57.639408   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:57.672580   78080 cri.go:89] found id: ""
	I0729 18:30:57.672610   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.672621   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:57.672632   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:57.672647   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:57.751550   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:57.751572   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:57.751586   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:57.840057   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:57.840097   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:57.884698   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:57.884737   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:57.944468   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:57.944497   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:00.459605   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:00.473079   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:00.473138   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:00.508492   78080 cri.go:89] found id: ""
	I0729 18:31:00.508525   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.508536   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:00.508543   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:00.508604   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:00.544844   78080 cri.go:89] found id: ""
	I0729 18:31:00.544875   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.544886   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:00.544899   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:00.544960   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:00.578402   78080 cri.go:89] found id: ""
	I0729 18:31:00.578432   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.578443   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:00.578450   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:00.578508   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:00.611886   78080 cri.go:89] found id: ""
	I0729 18:31:00.611913   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.611922   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:00.611928   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:00.611989   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:00.649126   78080 cri.go:89] found id: ""
	I0729 18:31:00.649153   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.649162   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:00.649168   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:00.649229   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:00.686534   78080 cri.go:89] found id: ""
	I0729 18:31:00.686561   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.686571   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:00.686578   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:00.686639   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:00.718656   78080 cri.go:89] found id: ""
	I0729 18:31:00.718680   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.718690   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:00.718696   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:00.718755   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:00.752740   78080 cri.go:89] found id: ""
	I0729 18:31:00.752766   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.752776   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:00.752786   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:00.752800   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:00.804293   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:00.804323   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:00.817988   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:00.818010   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:00.892178   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:00.892210   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:00.892231   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:00.973164   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:00.973199   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:58.434888   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:00.934518   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:00.239908   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:02.240038   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:04.240420   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:01.542080   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:03.542877   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:04.036213   77627 pod_ready.go:81] duration metric: took 4m0.000109353s for pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace to be "Ready" ...
	E0729 18:31:04.036235   77627 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 18:31:04.036250   77627 pod_ready.go:38] duration metric: took 4m10.564329435s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:31:04.036294   77627 kubeadm.go:597] duration metric: took 4m18.357564209s to restartPrimaryControlPlane
	W0729 18:31:04.036359   77627 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 18:31:04.036388   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:31:03.512105   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:03.526536   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:03.526602   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:03.561579   78080 cri.go:89] found id: ""
	I0729 18:31:03.561604   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.561614   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:03.561621   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:03.561681   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:03.603995   78080 cri.go:89] found id: ""
	I0729 18:31:03.604019   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.604028   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:03.604033   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:03.604079   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:03.640879   78080 cri.go:89] found id: ""
	I0729 18:31:03.640902   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.640910   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:03.640917   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:03.640971   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:03.675262   78080 cri.go:89] found id: ""
	I0729 18:31:03.675288   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.675296   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:03.675302   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:03.675349   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:03.708094   78080 cri.go:89] found id: ""
	I0729 18:31:03.708128   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.708137   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:03.708142   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:03.708190   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:03.748262   78080 cri.go:89] found id: ""
	I0729 18:31:03.748287   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.748298   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:03.748304   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:03.748360   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:03.789758   78080 cri.go:89] found id: ""
	I0729 18:31:03.789788   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.789800   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:03.789806   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:03.789893   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:03.829253   78080 cri.go:89] found id: ""
	I0729 18:31:03.829280   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.829291   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:03.829299   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:03.829317   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:03.883012   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:03.883044   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:03.899264   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:03.899294   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:03.970241   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:03.970261   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:03.970274   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:04.056205   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:04.056244   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:06.604919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:06.619163   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:06.619242   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:06.656939   78080 cri.go:89] found id: ""
	I0729 18:31:06.656970   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.656982   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:06.656989   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:06.657075   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:06.692577   78080 cri.go:89] found id: ""
	I0729 18:31:06.692608   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.692624   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:06.692632   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:06.692695   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:06.730045   78080 cri.go:89] found id: ""
	I0729 18:31:06.730077   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.730088   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:06.730096   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:06.730179   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:06.771794   78080 cri.go:89] found id: ""
	I0729 18:31:06.771820   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.771830   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:06.771838   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:06.771905   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:06.806149   78080 cri.go:89] found id: ""
	I0729 18:31:06.806177   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.806187   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:06.806194   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:06.806252   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:06.851875   78080 cri.go:89] found id: ""
	I0729 18:31:06.851905   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.851923   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:06.851931   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:06.851996   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:06.890335   78080 cri.go:89] found id: ""
	I0729 18:31:06.890382   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.890393   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:06.890399   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:06.890460   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:06.928873   78080 cri.go:89] found id: ""
	I0729 18:31:06.928902   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.928912   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:06.928922   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:06.928935   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:06.944269   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:06.944295   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:07.011658   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:07.011682   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:07.011697   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:07.109899   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:07.109948   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:07.154569   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:07.154600   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:02.935054   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:05.434752   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:06.242994   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:08.738448   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:09.709101   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:09.722387   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:09.722461   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:09.760443   78080 cri.go:89] found id: ""
	I0729 18:31:09.760471   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.760481   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:09.760488   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:09.760551   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:09.796177   78080 cri.go:89] found id: ""
	I0729 18:31:09.796200   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.796209   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:09.796214   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:09.796264   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:09.831955   78080 cri.go:89] found id: ""
	I0729 18:31:09.831983   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.831990   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:09.831995   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:09.832055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:09.863913   78080 cri.go:89] found id: ""
	I0729 18:31:09.863939   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.863949   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:09.863956   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:09.864014   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:09.897553   78080 cri.go:89] found id: ""
	I0729 18:31:09.897575   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.897583   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:09.897588   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:09.897645   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:09.935203   78080 cri.go:89] found id: ""
	I0729 18:31:09.935221   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.935228   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:09.935238   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:09.935296   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:09.971098   78080 cri.go:89] found id: ""
	I0729 18:31:09.971125   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.971135   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:09.971142   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:09.971224   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:10.006760   78080 cri.go:89] found id: ""
	I0729 18:31:10.006794   78080 logs.go:276] 0 containers: []
	W0729 18:31:10.006804   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:10.006815   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:10.006830   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:10.056037   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:10.056066   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:10.070633   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:10.070660   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:10.139953   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:10.139983   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:10.140002   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:10.220748   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:10.220781   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:07.436020   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:09.934218   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:11.934977   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:10.740109   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:13.239440   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:12.766391   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:12.779837   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:12.779889   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:12.813910   78080 cri.go:89] found id: ""
	I0729 18:31:12.813941   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.813951   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:12.813959   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:12.814008   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:12.848811   78080 cri.go:89] found id: ""
	I0729 18:31:12.848854   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.848865   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:12.848872   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:12.848927   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:12.884740   78080 cri.go:89] found id: ""
	I0729 18:31:12.884769   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.884780   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:12.884786   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:12.884833   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:12.923826   78080 cri.go:89] found id: ""
	I0729 18:31:12.923859   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.923870   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:12.923878   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:12.923930   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:12.959127   78080 cri.go:89] found id: ""
	I0729 18:31:12.959157   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.959168   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:12.959175   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:12.959245   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:12.994384   78080 cri.go:89] found id: ""
	I0729 18:31:12.994417   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.994430   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:12.994439   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:12.994506   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:13.027854   78080 cri.go:89] found id: ""
	I0729 18:31:13.027883   78080 logs.go:276] 0 containers: []
	W0729 18:31:13.027892   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:13.027897   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:13.027951   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:13.062270   78080 cri.go:89] found id: ""
	I0729 18:31:13.062300   78080 logs.go:276] 0 containers: []
	W0729 18:31:13.062310   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:13.062321   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:13.062334   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:13.114473   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:13.114500   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:13.127820   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:13.127845   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:13.195830   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:13.195848   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:13.195862   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:13.281711   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:13.281748   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:15.824456   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:15.837532   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:15.837587   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:15.871706   78080 cri.go:89] found id: ""
	I0729 18:31:15.871739   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.871750   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:15.871757   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:15.871817   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:15.906882   78080 cri.go:89] found id: ""
	I0729 18:31:15.906905   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.906912   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:15.906917   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:15.906976   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:15.943015   78080 cri.go:89] found id: ""
	I0729 18:31:15.943043   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.943057   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:15.943065   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:15.943126   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:15.980501   78080 cri.go:89] found id: ""
	I0729 18:31:15.980528   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.980536   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:15.980542   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:15.980588   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:16.014148   78080 cri.go:89] found id: ""
	I0729 18:31:16.014176   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.014183   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:16.014189   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:16.014236   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:16.048296   78080 cri.go:89] found id: ""
	I0729 18:31:16.048319   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.048326   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:16.048334   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:16.048392   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:16.084328   78080 cri.go:89] found id: ""
	I0729 18:31:16.084350   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.084358   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:16.084363   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:16.084411   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:16.120048   78080 cri.go:89] found id: ""
	I0729 18:31:16.120076   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.120084   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:16.120092   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:16.120105   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:16.173476   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:16.173503   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:16.190200   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:16.190232   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:16.261993   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:16.262014   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:16.262026   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:16.340298   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:16.340331   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:14.434706   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:16.936150   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:15.739493   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:18.239834   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:18.883152   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:18.897292   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:18.897360   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:18.931276   78080 cri.go:89] found id: ""
	I0729 18:31:18.931303   78080 logs.go:276] 0 containers: []
	W0729 18:31:18.931313   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:18.931321   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:18.931379   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:18.975803   78080 cri.go:89] found id: ""
	I0729 18:31:18.975832   78080 logs.go:276] 0 containers: []
	W0729 18:31:18.975843   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:18.975853   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:18.975912   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:19.012920   78080 cri.go:89] found id: ""
	I0729 18:31:19.012951   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.012963   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:19.012970   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:19.013031   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:19.047640   78080 cri.go:89] found id: ""
	I0729 18:31:19.047667   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.047679   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:19.047687   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:19.047749   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:19.082495   78080 cri.go:89] found id: ""
	I0729 18:31:19.082522   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.082533   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:19.082540   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:19.082591   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:19.117988   78080 cri.go:89] found id: ""
	I0729 18:31:19.118016   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.118027   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:19.118034   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:19.118096   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:19.153725   78080 cri.go:89] found id: ""
	I0729 18:31:19.153753   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.153764   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:19.153771   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:19.153836   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:19.192827   78080 cri.go:89] found id: ""
	I0729 18:31:19.192857   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.192868   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:19.192879   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:19.192894   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:19.208802   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:19.208833   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:19.285877   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:19.285897   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:19.285909   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:19.366563   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:19.366598   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:19.404563   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:19.404590   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:21.958449   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:21.971674   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:21.971739   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:22.006231   78080 cri.go:89] found id: ""
	I0729 18:31:22.006253   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.006261   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:22.006266   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:22.006314   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:22.042575   78080 cri.go:89] found id: ""
	I0729 18:31:22.042599   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.042609   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:22.042616   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:22.042679   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:22.079446   78080 cri.go:89] found id: ""
	I0729 18:31:22.079471   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.079482   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:22.079489   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:22.079554   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:22.115940   78080 cri.go:89] found id: ""
	I0729 18:31:22.115967   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.115976   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:22.115984   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:22.116055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:22.149420   78080 cri.go:89] found id: ""
	I0729 18:31:22.149447   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.149456   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:22.149461   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:22.149511   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:22.182992   78080 cri.go:89] found id: ""
	I0729 18:31:22.183019   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.183027   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:22.183032   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:22.183090   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:22.218441   78080 cri.go:89] found id: ""
	I0729 18:31:22.218474   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.218487   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:22.218497   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:22.218564   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:19.434020   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:21.434806   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:20.739308   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:22.741502   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:22.263135   78080 cri.go:89] found id: ""
	I0729 18:31:22.263164   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.263173   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:22.263183   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:22.263198   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:22.319010   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:22.319049   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:22.333151   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:22.333179   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:22.404661   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:22.404683   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:22.404706   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:22.488497   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:22.488537   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:25.032215   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:25.045114   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:25.045191   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:25.082244   78080 cri.go:89] found id: ""
	I0729 18:31:25.082278   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.082289   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:25.082299   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:25.082388   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:25.118295   78080 cri.go:89] found id: ""
	I0729 18:31:25.118318   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.118325   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:25.118331   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:25.118395   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:25.157948   78080 cri.go:89] found id: ""
	I0729 18:31:25.157974   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.157984   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:25.157992   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:25.158054   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:25.194708   78080 cri.go:89] found id: ""
	I0729 18:31:25.194734   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.194743   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:25.194751   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:25.194813   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:25.235923   78080 cri.go:89] found id: ""
	I0729 18:31:25.235952   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.235962   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:25.235969   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:25.236032   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:25.271316   78080 cri.go:89] found id: ""
	I0729 18:31:25.271342   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.271353   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:25.271360   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:25.271422   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:25.309399   78080 cri.go:89] found id: ""
	I0729 18:31:25.309427   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.309438   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:25.309446   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:25.309503   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:25.347979   78080 cri.go:89] found id: ""
	I0729 18:31:25.348009   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.348021   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:25.348031   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:25.348046   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:25.400785   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:25.400812   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:25.413891   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:25.413915   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:25.487721   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:25.487752   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:25.487767   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:25.575500   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:25.575531   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:23.935200   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:26.434289   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:25.240961   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:27.738838   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:27.738866   77859 pod_ready.go:81] duration metric: took 4m0.005785253s for pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace to be "Ready" ...
	E0729 18:31:27.738877   77859 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0729 18:31:27.738887   77859 pod_ready.go:38] duration metric: took 4m4.550102816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:31:27.738903   77859 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:31:27.738934   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:27.738991   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:27.798686   77859 cri.go:89] found id: "630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:27.798710   77859 cri.go:89] found id: ""
	I0729 18:31:27.798717   77859 logs.go:276] 1 containers: [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4]
	I0729 18:31:27.798774   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.804769   77859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:27.804827   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:27.849829   77859 cri.go:89] found id: "fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:27.849849   77859 cri.go:89] found id: ""
	I0729 18:31:27.849857   77859 logs.go:276] 1 containers: [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a]
	I0729 18:31:27.849909   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.854472   77859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:27.854540   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:27.891637   77859 cri.go:89] found id: "2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:27.891659   77859 cri.go:89] found id: ""
	I0729 18:31:27.891668   77859 logs.go:276] 1 containers: [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b]
	I0729 18:31:27.891715   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.896663   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:27.896713   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:27.941948   77859 cri.go:89] found id: "991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:27.941968   77859 cri.go:89] found id: ""
	I0729 18:31:27.941976   77859 logs.go:276] 1 containers: [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd]
	I0729 18:31:27.942018   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.946770   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:27.946821   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:27.988118   77859 cri.go:89] found id: "ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:27.988139   77859 cri.go:89] found id: ""
	I0729 18:31:27.988147   77859 logs.go:276] 1 containers: [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9]
	I0729 18:31:27.988193   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.992474   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:27.992535   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:28.032779   77859 cri.go:89] found id: "92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:28.032801   77859 cri.go:89] found id: ""
	I0729 18:31:28.032811   77859 logs.go:276] 1 containers: [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc]
	I0729 18:31:28.032859   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:28.037791   77859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:28.037838   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:28.081087   77859 cri.go:89] found id: ""
	I0729 18:31:28.081115   77859 logs.go:276] 0 containers: []
	W0729 18:31:28.081124   77859 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:28.081131   77859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 18:31:28.081183   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 18:31:28.123906   77859 cri.go:89] found id: "9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:28.123927   77859 cri.go:89] found id: "482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:28.123933   77859 cri.go:89] found id: ""
	I0729 18:31:28.123940   77859 logs.go:276] 2 containers: [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b]
	I0729 18:31:28.123979   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:28.128737   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:28.133127   77859 logs.go:123] Gathering logs for storage-provisioner [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481] ...
	I0729 18:31:28.133201   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:28.182950   77859 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:28.182985   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:28.241873   77859 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:28.241914   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 18:31:28.391355   77859 logs.go:123] Gathering logs for kube-apiserver [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4] ...
	I0729 18:31:28.391389   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:28.447637   77859 logs.go:123] Gathering logs for etcd [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a] ...
	I0729 18:31:28.447671   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:28.496815   77859 logs.go:123] Gathering logs for kube-scheduler [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd] ...
	I0729 18:31:28.496848   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:28.540617   77859 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:28.540651   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:29.063074   77859 logs.go:123] Gathering logs for container status ...
	I0729 18:31:29.063116   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:29.123348   77859 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:29.123378   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:29.137340   77859 logs.go:123] Gathering logs for coredns [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b] ...
	I0729 18:31:29.137365   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:29.174775   77859 logs.go:123] Gathering logs for kube-proxy [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9] ...
	I0729 18:31:29.174810   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:29.227526   77859 logs.go:123] Gathering logs for kube-controller-manager [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc] ...
	I0729 18:31:29.227560   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:29.281814   77859 logs.go:123] Gathering logs for storage-provisioner [482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b] ...
	I0729 18:31:29.281844   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:28.121761   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:28.136756   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:28.136813   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:28.175461   78080 cri.go:89] found id: ""
	I0729 18:31:28.175491   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.175502   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:28.175509   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:28.175567   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:28.215024   78080 cri.go:89] found id: ""
	I0729 18:31:28.215046   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.215055   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:28.215060   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:28.215122   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:28.253999   78080 cri.go:89] found id: ""
	I0729 18:31:28.254023   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.254031   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:28.254037   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:28.254090   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:28.287902   78080 cri.go:89] found id: ""
	I0729 18:31:28.287929   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.287940   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:28.287948   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:28.288006   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:28.322390   78080 cri.go:89] found id: ""
	I0729 18:31:28.322422   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.322433   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:28.322441   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:28.322500   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:28.356951   78080 cri.go:89] found id: ""
	I0729 18:31:28.356980   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.356991   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:28.356999   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:28.357060   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:28.393439   78080 cri.go:89] found id: ""
	I0729 18:31:28.393461   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.393471   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:28.393477   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:28.393535   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:28.431827   78080 cri.go:89] found id: ""
	I0729 18:31:28.431858   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.431868   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:28.431878   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:28.431892   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:28.509279   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:28.509315   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:28.564036   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:28.564064   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:28.626970   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:28.627000   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:28.641417   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:28.641446   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:28.713406   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:31.213942   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:31.228942   78080 kubeadm.go:597] duration metric: took 4m3.040952507s to restartPrimaryControlPlane
	W0729 18:31:31.229020   78080 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 18:31:31.229042   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:31:31.696335   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:31:31.711230   78080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:31:31.720924   78080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:31:31.730348   78080 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:31:31.730378   78080 kubeadm.go:157] found existing configuration files:
	
	I0729 18:31:31.730418   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:31:31.739761   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:31:31.739810   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:31:31.749021   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:31:31.758107   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:31:31.758155   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:31:31.768326   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:31:31.777347   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:31:31.777388   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:31:31.786752   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:31:31.795728   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:31:31.795776   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:31:31.805369   78080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:31:31.883678   78080 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 18:31:31.883751   78080 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:31:32.040989   78080 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:31:32.041127   78080 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:31:32.041259   78080 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:31:32.261525   78080 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:31:28.434784   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:30.435227   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:32.263137   78080 out.go:204]   - Generating certificates and keys ...
	I0729 18:31:32.263242   78080 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:31:32.263349   78080 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:31:32.263461   78080 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:31:32.263554   78080 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:31:32.263640   78080 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:31:32.263724   78080 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:31:32.263801   78080 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:31:32.263872   78080 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:31:32.263993   78080 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:31:32.264109   78080 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:31:32.264164   78080 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:31:32.264255   78080 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:31:32.435248   78080 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:31:32.509478   78080 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:31:32.737003   78080 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:31:33.079523   78080 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:31:33.099871   78080 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:31:33.101450   78080 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:31:33.101520   78080 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:31:33.242577   78080 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:31:31.826678   77859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:31.845448   77859 api_server.go:72] duration metric: took 4m16.365262679s to wait for apiserver process to appear ...
	I0729 18:31:31.845478   77859 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:31:31.845519   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:31.845568   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:31.889194   77859 cri.go:89] found id: "630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:31.889226   77859 cri.go:89] found id: ""
	I0729 18:31:31.889236   77859 logs.go:276] 1 containers: [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4]
	I0729 18:31:31.889290   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:31.894167   77859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:31.894271   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:31.936287   77859 cri.go:89] found id: "fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:31.936306   77859 cri.go:89] found id: ""
	I0729 18:31:31.936315   77859 logs.go:276] 1 containers: [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a]
	I0729 18:31:31.936367   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:31.941051   77859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:31.941110   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:31.978033   77859 cri.go:89] found id: "2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:31.978057   77859 cri.go:89] found id: ""
	I0729 18:31:31.978066   77859 logs.go:276] 1 containers: [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b]
	I0729 18:31:31.978115   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:31.982632   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:31.982704   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:32.023792   77859 cri.go:89] found id: "991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:32.023812   77859 cri.go:89] found id: ""
	I0729 18:31:32.023820   77859 logs.go:276] 1 containers: [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd]
	I0729 18:31:32.023875   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.028309   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:32.028367   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:32.071944   77859 cri.go:89] found id: "ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:32.071966   77859 cri.go:89] found id: ""
	I0729 18:31:32.071975   77859 logs.go:276] 1 containers: [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9]
	I0729 18:31:32.072033   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.076171   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:32.076252   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:32.111357   77859 cri.go:89] found id: "92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:32.111379   77859 cri.go:89] found id: ""
	I0729 18:31:32.111389   77859 logs.go:276] 1 containers: [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc]
	I0729 18:31:32.111446   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.115718   77859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:32.115775   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:32.168552   77859 cri.go:89] found id: ""
	I0729 18:31:32.168586   77859 logs.go:276] 0 containers: []
	W0729 18:31:32.168597   77859 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:32.168604   77859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 18:31:32.168686   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 18:31:32.210002   77859 cri.go:89] found id: "9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:32.210027   77859 cri.go:89] found id: "482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:32.210034   77859 cri.go:89] found id: ""
	I0729 18:31:32.210043   77859 logs.go:276] 2 containers: [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b]
	I0729 18:31:32.210090   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.214929   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.220097   77859 logs.go:123] Gathering logs for container status ...
	I0729 18:31:32.220121   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:32.270343   77859 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:32.270384   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:32.329269   77859 logs.go:123] Gathering logs for kube-apiserver [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4] ...
	I0729 18:31:32.329303   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:32.388361   77859 logs.go:123] Gathering logs for storage-provisioner [482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b] ...
	I0729 18:31:32.388388   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:32.430072   77859 logs.go:123] Gathering logs for coredns [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b] ...
	I0729 18:31:32.430108   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:32.471669   77859 logs.go:123] Gathering logs for kube-scheduler [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd] ...
	I0729 18:31:32.471701   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:32.508395   77859 logs.go:123] Gathering logs for kube-proxy [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9] ...
	I0729 18:31:32.508424   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:32.548968   77859 logs.go:123] Gathering logs for kube-controller-manager [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc] ...
	I0729 18:31:32.549001   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:32.605269   77859 logs.go:123] Gathering logs for storage-provisioner [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481] ...
	I0729 18:31:32.605306   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:32.642298   77859 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:32.642330   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:32.659407   77859 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:32.659431   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 18:31:32.776509   77859 logs.go:123] Gathering logs for etcd [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a] ...
	I0729 18:31:32.776544   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:32.832365   77859 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:32.832395   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
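
The log-gathering pass above resolves one container ID per control-plane component with `crictl ps -a --quiet --name=<component>`, then tails the last 400 lines of each container's logs (plus the kubelet and crio journals and dmesg). A simplified sketch of that fan-out, assuming crictl is invoked locally rather than over SSH as minikube actually does:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Simplified sketch of the log-gathering fan-out shown above: resolve the
// container IDs per component with crictl, then tail each container's logs.
// Assumes crictl is on PATH locally; minikube runs these via ssh_runner.
func gatherComponentLogs(components []string) {
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			continue
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Printf("Gathering logs for %s [%s] ...\n", name, id)
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Print(string(logs))
		}
	}
}

func main() {
	gatherComponentLogs([]string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"})
}
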
	I0729 18:31:35.748109   77627 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.711694865s)
	I0729 18:31:35.748184   77627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:31:35.765137   77627 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:31:35.775945   77627 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:31:35.786206   77627 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:31:35.786232   77627 kubeadm.go:157] found existing configuration files:
	
	I0729 18:31:35.786284   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:31:35.797157   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:31:35.797218   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:31:35.810497   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:31:35.821537   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:31:35.821603   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:31:35.832985   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:31:35.842247   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:31:35.842309   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:31:35.852578   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:31:35.861798   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:31:35.861858   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:31:35.872903   77627 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:31:35.926675   77627 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 18:31:35.926872   77627 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:31:36.089002   77627 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:31:36.089179   77627 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:31:36.089310   77627 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:31:36.321844   77627 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:31:33.244436   78080 out.go:204]   - Booting up control plane ...
	I0729 18:31:33.244570   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:31:33.245677   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:31:33.249530   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:31:33.250262   78080 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:31:33.261418   78080 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 18:31:36.324255   77627 out.go:204]   - Generating certificates and keys ...
	I0729 18:31:36.324352   77627 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:31:36.324435   77627 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:31:36.324539   77627 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:31:36.324619   77627 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:31:36.324707   77627 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:31:36.324780   77627 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:31:36.324864   77627 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:31:36.324945   77627 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:31:36.325036   77627 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:31:36.325175   77627 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:31:36.325340   77627 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:31:36.325425   77627 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:31:36.815491   77627 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:31:36.870914   77627 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 18:31:36.957705   77627 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:31:37.074845   77627 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:31:37.220920   77627 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:31:37.221651   77627 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:31:37.224384   77627 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:31:32.435653   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:34.933615   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:36.935070   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:35.792366   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:31:35.801160   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 200:
	ok
	I0729 18:31:35.804043   77859 api_server.go:141] control plane version: v1.30.3
	I0729 18:31:35.804063   77859 api_server.go:131] duration metric: took 3.958578435s to wait for apiserver health ...
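
The apiserver health wait logged above simply polls the /healthz endpoint until it returns 200 with body "ok". A minimal sketch of such a poll loop; as an assumption it skips TLS verification for brevity, whereas minikube itself verifies against the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// Minimal sketch of the healthz poll: keep requesting
// https://<node-ip>:<port>/healthz until the body reads "ok" or the deadline passes.
// Assumption: InsecureSkipVerify is used here only to keep the example short.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.244:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
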
	I0729 18:31:35.804072   77859 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:31:35.804099   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:35.804140   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:35.845977   77859 cri.go:89] found id: "630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:35.846003   77859 cri.go:89] found id: ""
	I0729 18:31:35.846018   77859 logs.go:276] 1 containers: [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4]
	I0729 18:31:35.846072   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:35.851227   77859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:35.851302   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:35.892117   77859 cri.go:89] found id: "fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:35.892142   77859 cri.go:89] found id: ""
	I0729 18:31:35.892158   77859 logs.go:276] 1 containers: [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a]
	I0729 18:31:35.892215   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:35.897136   77859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:35.897216   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:35.941512   77859 cri.go:89] found id: "2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:35.941532   77859 cri.go:89] found id: ""
	I0729 18:31:35.941541   77859 logs.go:276] 1 containers: [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b]
	I0729 18:31:35.941598   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:35.946072   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:35.946124   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:35.984306   77859 cri.go:89] found id: "991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:35.984327   77859 cri.go:89] found id: ""
	I0729 18:31:35.984335   77859 logs.go:276] 1 containers: [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd]
	I0729 18:31:35.984381   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:35.988605   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:35.988671   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:36.031476   77859 cri.go:89] found id: "ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:36.031504   77859 cri.go:89] found id: ""
	I0729 18:31:36.031514   77859 logs.go:276] 1 containers: [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9]
	I0729 18:31:36.031567   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:36.037262   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:36.037319   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:36.078054   77859 cri.go:89] found id: "92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:36.078076   77859 cri.go:89] found id: ""
	I0729 18:31:36.078084   77859 logs.go:276] 1 containers: [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc]
	I0729 18:31:36.078134   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:36.082628   77859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:36.082693   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:36.122768   77859 cri.go:89] found id: ""
	I0729 18:31:36.122791   77859 logs.go:276] 0 containers: []
	W0729 18:31:36.122799   77859 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:36.122804   77859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 18:31:36.122849   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 18:31:36.166611   77859 cri.go:89] found id: "9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:36.166636   77859 cri.go:89] found id: "482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:36.166642   77859 cri.go:89] found id: ""
	I0729 18:31:36.166650   77859 logs.go:276] 2 containers: [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b]
	I0729 18:31:36.166712   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:36.171240   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:36.175336   77859 logs.go:123] Gathering logs for kube-controller-manager [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc] ...
	I0729 18:31:36.175354   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:36.233224   77859 logs.go:123] Gathering logs for storage-provisioner [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481] ...
	I0729 18:31:36.233255   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:36.282788   77859 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:36.282820   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:36.675615   77859 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:36.675660   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:36.731559   77859 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:36.731602   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:36.747814   77859 logs.go:123] Gathering logs for kube-scheduler [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd] ...
	I0729 18:31:36.747845   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:36.786940   77859 logs.go:123] Gathering logs for kube-proxy [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9] ...
	I0729 18:31:36.787036   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:36.829659   77859 logs.go:123] Gathering logs for storage-provisioner [482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b] ...
	I0729 18:31:36.829694   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:36.865907   77859 logs.go:123] Gathering logs for container status ...
	I0729 18:31:36.865939   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:36.908399   77859 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:36.908427   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 18:31:37.012220   77859 logs.go:123] Gathering logs for kube-apiserver [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4] ...
	I0729 18:31:37.012255   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:37.063429   77859 logs.go:123] Gathering logs for etcd [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a] ...
	I0729 18:31:37.063463   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:37.107615   77859 logs.go:123] Gathering logs for coredns [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b] ...
	I0729 18:31:37.107654   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:39.655973   77859 system_pods.go:59] 8 kube-system pods found
	I0729 18:31:39.656011   77859 system_pods.go:61] "coredns-7db6d8ff4d-mk6mx" [e005b1f9-cc7a-45aa-915e-85a461ebc814] Running
	I0729 18:31:39.656019   77859 system_pods.go:61] "etcd-default-k8s-diff-port-502055" [72b552cc-67b0-46bf-b3dd-b6732ebe8493] Running
	I0729 18:31:39.656025   77859 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-502055" [0dc22dbc-667e-4d6f-9938-b13bf3503f79] Running
	I0729 18:31:39.656032   77859 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-502055" [4df00b98-12cf-4359-9d98-8cce6ee9708a] Running
	I0729 18:31:39.656037   77859 system_pods.go:61] "kube-proxy-cgdm8" [57a99bb3-9e63-47dd-a958-5be7f3c0a9c0] Running
	I0729 18:31:39.656043   77859 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-502055" [247b7cd1-6267-469d-af05-b33b284ae846] Running
	I0729 18:31:39.656051   77859 system_pods.go:61] "metrics-server-569cc877fc-bm8tm" [6891d9ee-82db-4307-adf1-ff60d35506bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:31:39.656057   77859 system_pods.go:61] "storage-provisioner" [c2264d30-60dc-41f9-9b84-3b073031cf1b] Running
	I0729 18:31:39.656068   77859 system_pods.go:74] duration metric: took 3.851988452s to wait for pod list to return data ...
	I0729 18:31:39.656081   77859 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:31:39.658999   77859 default_sa.go:45] found service account: "default"
	I0729 18:31:39.659024   77859 default_sa.go:55] duration metric: took 2.935237ms for default service account to be created ...
	I0729 18:31:39.659034   77859 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 18:31:39.664926   77859 system_pods.go:86] 8 kube-system pods found
	I0729 18:31:39.664952   77859 system_pods.go:89] "coredns-7db6d8ff4d-mk6mx" [e005b1f9-cc7a-45aa-915e-85a461ebc814] Running
	I0729 18:31:39.664959   77859 system_pods.go:89] "etcd-default-k8s-diff-port-502055" [72b552cc-67b0-46bf-b3dd-b6732ebe8493] Running
	I0729 18:31:39.664966   77859 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-502055" [0dc22dbc-667e-4d6f-9938-b13bf3503f79] Running
	I0729 18:31:39.664973   77859 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-502055" [4df00b98-12cf-4359-9d98-8cce6ee9708a] Running
	I0729 18:31:39.664979   77859 system_pods.go:89] "kube-proxy-cgdm8" [57a99bb3-9e63-47dd-a958-5be7f3c0a9c0] Running
	I0729 18:31:39.664987   77859 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-502055" [247b7cd1-6267-469d-af05-b33b284ae846] Running
	I0729 18:31:39.665003   77859 system_pods.go:89] "metrics-server-569cc877fc-bm8tm" [6891d9ee-82db-4307-adf1-ff60d35506bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:31:39.665013   77859 system_pods.go:89] "storage-provisioner" [c2264d30-60dc-41f9-9b84-3b073031cf1b] Running
	I0729 18:31:39.665025   77859 system_pods.go:126] duration metric: took 5.974722ms to wait for k8s-apps to be running ...
	I0729 18:31:39.665036   77859 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 18:31:39.665093   77859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:31:39.685280   77859 system_svc.go:56] duration metric: took 20.237099ms WaitForService to wait for kubelet
	I0729 18:31:39.685311   77859 kubeadm.go:582] duration metric: took 4m24.205126513s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:31:39.685336   77859 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:31:39.688419   77859 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:31:39.688441   77859 node_conditions.go:123] node cpu capacity is 2
	I0729 18:31:39.688455   77859 node_conditions.go:105] duration metric: took 3.111768ms to run NodePressure ...
	I0729 18:31:39.688470   77859 start.go:241] waiting for startup goroutines ...
	I0729 18:31:39.688483   77859 start.go:246] waiting for cluster config update ...
	I0729 18:31:39.688497   77859 start.go:255] writing updated cluster config ...
	I0729 18:31:39.688830   77859 ssh_runner.go:195] Run: rm -f paused
	I0729 18:31:39.739685   77859 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 18:31:39.741763   77859 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-502055" cluster and "default" namespace by default
	I0729 18:31:37.226046   77627 out.go:204]   - Booting up control plane ...
	I0729 18:31:37.226163   77627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:31:37.227852   77627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:31:37.228710   77627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:31:37.248177   77627 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:31:37.248863   77627 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:31:37.248915   77627 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:31:37.376905   77627 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 18:31:37.377030   77627 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 18:31:37.878928   77627 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.066447ms
	I0729 18:31:37.879057   77627 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 18:31:38.935622   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:41.433736   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:42.880479   77627 kubeadm.go:310] [api-check] The API server is healthy after 5.001345894s
	I0729 18:31:42.892513   77627 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 18:31:42.910175   77627 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 18:31:42.948111   77627 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 18:31:42.948340   77627 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-409322 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 18:31:42.966823   77627 kubeadm.go:310] [bootstrap-token] Using token: f8a98i.3r2is78gllm02lfe
	I0729 18:31:42.968170   77627 out.go:204]   - Configuring RBAC rules ...
	I0729 18:31:42.968304   77627 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 18:31:42.978257   77627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 18:31:42.986458   77627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 18:31:42.989744   77627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 18:31:42.992484   77627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 18:31:42.995162   77627 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 18:31:43.287739   77627 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 18:31:43.726370   77627 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 18:31:44.290225   77627 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 18:31:44.291166   77627 kubeadm.go:310] 
	I0729 18:31:44.291267   77627 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 18:31:44.291278   77627 kubeadm.go:310] 
	I0729 18:31:44.291392   77627 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 18:31:44.291401   77627 kubeadm.go:310] 
	I0729 18:31:44.291436   77627 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 18:31:44.291530   77627 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 18:31:44.291589   77627 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 18:31:44.291606   77627 kubeadm.go:310] 
	I0729 18:31:44.291701   77627 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 18:31:44.291713   77627 kubeadm.go:310] 
	I0729 18:31:44.291788   77627 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 18:31:44.291797   77627 kubeadm.go:310] 
	I0729 18:31:44.291860   77627 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 18:31:44.291954   77627 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 18:31:44.292052   77627 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 18:31:44.292070   77627 kubeadm.go:310] 
	I0729 18:31:44.292167   77627 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 18:31:44.292269   77627 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 18:31:44.292280   77627 kubeadm.go:310] 
	I0729 18:31:44.292402   77627 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token f8a98i.3r2is78gllm02lfe \
	I0729 18:31:44.292543   77627 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 \
	I0729 18:31:44.292585   77627 kubeadm.go:310] 	--control-plane 
	I0729 18:31:44.292595   77627 kubeadm.go:310] 
	I0729 18:31:44.292710   77627 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 18:31:44.292732   77627 kubeadm.go:310] 
	I0729 18:31:44.292836   77627 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token f8a98i.3r2is78gllm02lfe \
	I0729 18:31:44.293015   77627 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 
	I0729 18:31:44.293440   77627 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:31:44.293500   77627 cni.go:84] Creating CNI manager for ""
	I0729 18:31:44.293512   77627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:31:44.295432   77627 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:31:44.296845   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:31:44.308178   77627 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
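
The 496-byte conflist copied to /etc/cni/net.d/1-k8s.conflist above is not printed in the log. For illustration only, the sketch below writes a generic bridge-plugin conflist in the standard CNI format; the name, subnet, and plugin list are placeholders and not minikube's actual file:

package main

import (
	"fmt"
	"os"
)

// Illustrative only: a generic bridge CNI conflist. The actual contents of
// minikube's 1-k8s.conflist are not shown in the log; all values here are
// placeholder assumptions.
const bridgeConflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println("write conflist:", err)
	}
}
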
	I0729 18:31:44.334403   77627 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 18:31:44.334542   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:44.334562   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-409322 minikube.k8s.io/updated_at=2024_07_29T18_31_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899 minikube.k8s.io/name=embed-certs-409322 minikube.k8s.io/primary=true
	I0729 18:31:44.366345   77627 ops.go:34] apiserver oom_adj: -16
	I0729 18:31:44.537970   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:43.433884   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:45.434714   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:45.039020   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:45.538831   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:46.038700   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:46.538761   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:47.038725   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:47.538100   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:48.038309   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:48.538896   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:49.039011   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:49.538333   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:47.435067   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:49.934658   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:50.038548   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:50.538590   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:51.038131   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:51.538253   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:52.038599   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:52.538827   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:53.038077   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:53.538860   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:54.038530   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:54.538952   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:52.433783   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:54.434442   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:56.434864   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:55.038263   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:55.538050   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:56.038006   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:56.538079   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:57.038042   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:57.538146   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:57.696274   77627 kubeadm.go:1113] duration metric: took 13.36179604s to wait for elevateKubeSystemPrivileges
	I0729 18:31:57.696308   77627 kubeadm.go:394] duration metric: took 5m12.066483926s to StartCluster
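
The long run of `kubectl get sa default` calls above is the wait behind elevateKubeSystemPrivileges: the "default" ServiceAccount must exist before the earlier `create clusterrolebinding minikube-rbac` binding to kube-system:default can take effect. A sketch of such a retry loop, assuming a fixed 500ms interval (roughly the cadence visible in the timestamps, not a documented value):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Sketch of the poll loop behind the repeated "kubectl get sa default" lines:
// retry until the default ServiceAccount exists, then stop; give up after a
// deadline. Assumptions: kubectl path, kubeconfig, and interval are hard-coded.
func waitForDefaultServiceAccount(timeout time.Duration) error {
	kubectl := "/var/lib/minikube/binaries/v1.30.3/kubectl"
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	if err := waitForDefaultServiceAccount(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
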
	I0729 18:31:57.696324   77627 settings.go:142] acquiring lock: {Name:mkd2c4591636cc1d19b23a0dab1807db2e7ea395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:31:57.696406   77627 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:31:57.698195   77627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:31:57.698479   77627 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:31:57.698592   77627 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 18:31:57.698674   77627 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-409322"
	I0729 18:31:57.698688   77627 addons.go:69] Setting metrics-server=true in profile "embed-certs-409322"
	I0729 18:31:57.698695   77627 addons.go:69] Setting default-storageclass=true in profile "embed-certs-409322"
	I0729 18:31:57.698714   77627 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-409322"
	I0729 18:31:57.698719   77627 addons.go:234] Setting addon metrics-server=true in "embed-certs-409322"
	W0729 18:31:57.698723   77627 addons.go:243] addon storage-provisioner should already be in state true
	W0729 18:31:57.698729   77627 addons.go:243] addon metrics-server should already be in state true
	I0729 18:31:57.698733   77627 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-409322"
	I0729 18:31:57.698755   77627 host.go:66] Checking if "embed-certs-409322" exists ...
	I0729 18:31:57.698676   77627 config.go:182] Loaded profile config "embed-certs-409322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:31:57.698760   77627 host.go:66] Checking if "embed-certs-409322" exists ...
	I0729 18:31:57.699157   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.699169   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.699207   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.699170   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.699229   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.699209   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.700201   77627 out.go:177] * Verifying Kubernetes components...
	I0729 18:31:57.701577   77627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:31:57.715130   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44873
	I0729 18:31:57.715156   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34459
	I0729 18:31:57.715708   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.715759   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.716320   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.716329   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.716344   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.716345   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.716666   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.716672   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.716868   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:31:57.717251   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.717283   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.717715   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41041
	I0729 18:31:57.718172   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.718684   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.718709   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.719111   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.719630   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.719670   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.720815   77627 addons.go:234] Setting addon default-storageclass=true in "embed-certs-409322"
	W0729 18:31:57.720839   77627 addons.go:243] addon default-storageclass should already be in state true
	I0729 18:31:57.720870   77627 host.go:66] Checking if "embed-certs-409322" exists ...
	I0729 18:31:57.721233   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.721264   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.733757   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0729 18:31:57.734325   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.735372   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.735397   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.735736   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.735928   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:31:57.735939   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35853
	I0729 18:31:57.736244   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.736923   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.736942   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.737318   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.737664   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:31:57.739761   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:31:57.740354   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:31:57.741103   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43867
	I0729 18:31:57.741489   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.741979   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.741999   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.742296   77627 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 18:31:57.742348   77627 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:31:57.742400   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.743411   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.743443   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.743498   77627 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 18:31:57.743515   77627 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 18:31:57.743537   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:31:57.743682   77627 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:31:57.743697   77627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 18:31:57.743711   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:31:57.748331   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.748743   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:31:57.748759   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.748941   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:31:57.748986   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.749110   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:31:57.749290   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:31:57.749423   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:31:57.749638   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:31:57.749650   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.749671   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:31:57.749834   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:31:57.749940   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:31:57.750051   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:31:57.760794   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33699
	I0729 18:31:57.761136   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.761574   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.761585   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.761954   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.762133   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:31:57.764344   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:31:57.764532   77627 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 18:31:57.764541   77627 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 18:31:57.764555   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:31:57.767111   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.767485   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:31:57.767498   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.767625   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:31:57.767763   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:31:57.767875   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:31:57.768004   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:31:57.965911   77627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:31:57.986557   77627 node_ready.go:35] waiting up to 6m0s for node "embed-certs-409322" to be "Ready" ...
	I0729 18:31:57.995790   77627 node_ready.go:49] node "embed-certs-409322" has status "Ready":"True"
	I0729 18:31:57.995809   77627 node_ready.go:38] duration metric: took 9.222398ms for node "embed-certs-409322" to be "Ready" ...
	I0729 18:31:57.995817   77627 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:31:58.003516   77627 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wpnfg" in "kube-system" namespace to be "Ready" ...
	I0729 18:31:58.047522   77627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:31:58.053274   77627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 18:31:58.053290   77627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 18:31:58.074101   77627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 18:31:58.074127   77627 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 18:31:58.088159   77627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 18:31:58.097491   77627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:31:58.097518   77627 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 18:31:58.125335   77627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:31:58.628396   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.628425   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.628466   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.628480   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.628847   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.628909   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Closing plugin on server side
	I0729 18:31:58.628918   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.628936   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.628946   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.628955   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.628914   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.628898   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Closing plugin on server side
	I0729 18:31:58.629017   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.629046   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.629268   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.629281   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.630616   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Closing plugin on server side
	I0729 18:31:58.630636   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.630649   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.660029   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.660061   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.660339   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.660358   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.975389   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.975414   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.975721   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.975740   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.975750   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.975760   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.976034   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.976051   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.976063   77627 addons.go:475] Verifying addon metrics-server=true in "embed-certs-409322"
	I0729 18:31:58.978172   77627 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 18:31:58.979568   77627 addons.go:510] duration metric: took 1.280977366s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 18:31:58.935700   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:00.935984   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:00.009825   77627 pod_ready.go:92] pod "coredns-7db6d8ff4d-wpnfg" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:00.009846   77627 pod_ready.go:81] duration metric: took 2.006300447s for pod "coredns-7db6d8ff4d-wpnfg" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:00.009855   77627 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:02.016463   77627 pod_ready.go:102] pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:04.515885   77627 pod_ready.go:102] pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:03.432654   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:05.434708   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:06.517308   77627 pod_ready.go:102] pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:09.016256   77627 pod_ready.go:92] pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.016276   77627 pod_ready.go:81] duration metric: took 9.006414116s for pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.016287   77627 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.021639   77627 pod_ready.go:92] pod "etcd-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.021661   77627 pod_ready.go:81] duration metric: took 5.365088ms for pod "etcd-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.021672   77627 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.026599   77627 pod_ready.go:92] pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.026618   77627 pod_ready.go:81] duration metric: took 4.939458ms for pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.026629   77627 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.031994   77627 pod_ready.go:92] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.032009   77627 pod_ready.go:81] duration metric: took 5.37307ms for pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.032020   77627 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kxf5z" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.036180   77627 pod_ready.go:92] pod "kube-proxy-kxf5z" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.036196   77627 pod_ready.go:81] duration metric: took 4.16934ms for pod "kube-proxy-kxf5z" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.036205   77627 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.414950   77627 pod_ready.go:92] pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.414973   77627 pod_ready.go:81] duration metric: took 378.76116ms for pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.414981   77627 pod_ready.go:38] duration metric: took 11.419116871s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:32:09.414995   77627 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:32:09.415042   77627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:32:09.434210   77627 api_server.go:72] duration metric: took 11.735691998s to wait for apiserver process to appear ...
	I0729 18:32:09.434240   77627 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:32:09.434260   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:32:09.439755   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I0729 18:32:09.440612   77627 api_server.go:141] control plane version: v1.30.3
	I0729 18:32:09.440631   77627 api_server.go:131] duration metric: took 6.382802ms to wait for apiserver health ...
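As an illustration of the health-check loop the two lines above record (api_server.go polling https://192.168.39.58:8443/healthz until it answers 200), here is a minimal Go sketch. The function name, timeouts, and the skipped TLS verification are assumptions for the sketch, not minikube's actual implementation.

// healthz_poll.go - illustrative sketch only; assumes a self-signed apiserver
// cert, so TLS verification is skipped (a real client would load the cluster CA).
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // /healthz returned 200: the apiserver is up
			}
		}
		time.Sleep(500 * time.Millisecond) // poll interval is an assumption
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.58:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}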
	I0729 18:32:09.440640   77627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:32:09.617533   77627 system_pods.go:59] 9 kube-system pods found
	I0729 18:32:09.617564   77627 system_pods.go:61] "coredns-7db6d8ff4d-wpnfg" [687cbc8f-370a-4b72-bc1c-6ae36efe890e] Running
	I0729 18:32:09.617569   77627 system_pods.go:61] "coredns-7db6d8ff4d-wztpj" [1f1a01e7-9cec-4ba8-a340-8f9ccdd728d7] Running
	I0729 18:32:09.617572   77627 system_pods.go:61] "etcd-embed-certs-409322" [68de54c3-7d47-4e79-a064-08b013b1d910] Running
	I0729 18:32:09.617575   77627 system_pods.go:61] "kube-apiserver-embed-certs-409322" [dc1a0568-ef7c-493f-91fb-7438456daf6d] Running
	I0729 18:32:09.617579   77627 system_pods.go:61] "kube-controller-manager-embed-certs-409322" [da715e8c-2437-487b-b4e0-c93af2f079f7] Running
	I0729 18:32:09.617582   77627 system_pods.go:61] "kube-proxy-kxf5z" [74ed1812-b3bf-429d-b8f1-bdccb3415fb5] Running
	I0729 18:32:09.617584   77627 system_pods.go:61] "kube-scheduler-embed-certs-409322" [188cf21a-9a8a-45de-9a91-9e593626ce6d] Running
	I0729 18:32:09.617591   77627 system_pods.go:61] "metrics-server-569cc877fc-6q4nl" [57dc61cc-7490-49e5-9d03-c81aa5d25aea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:32:09.617596   77627 system_pods.go:61] "storage-provisioner" [b0b1e31d-9b5c-4e82-aea7-56184832c053] Running
	I0729 18:32:09.617604   77627 system_pods.go:74] duration metric: took 176.958452ms to wait for pod list to return data ...
	I0729 18:32:09.617614   77627 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:32:09.813846   77627 default_sa.go:45] found service account: "default"
	I0729 18:32:09.813871   77627 default_sa.go:55] duration metric: took 196.249412ms for default service account to be created ...
	I0729 18:32:09.813886   77627 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 18:32:10.019167   77627 system_pods.go:86] 9 kube-system pods found
	I0729 18:32:10.019199   77627 system_pods.go:89] "coredns-7db6d8ff4d-wpnfg" [687cbc8f-370a-4b72-bc1c-6ae36efe890e] Running
	I0729 18:32:10.019208   77627 system_pods.go:89] "coredns-7db6d8ff4d-wztpj" [1f1a01e7-9cec-4ba8-a340-8f9ccdd728d7] Running
	I0729 18:32:10.019214   77627 system_pods.go:89] "etcd-embed-certs-409322" [68de54c3-7d47-4e79-a064-08b013b1d910] Running
	I0729 18:32:10.019220   77627 system_pods.go:89] "kube-apiserver-embed-certs-409322" [dc1a0568-ef7c-493f-91fb-7438456daf6d] Running
	I0729 18:32:10.019227   77627 system_pods.go:89] "kube-controller-manager-embed-certs-409322" [da715e8c-2437-487b-b4e0-c93af2f079f7] Running
	I0729 18:32:10.019233   77627 system_pods.go:89] "kube-proxy-kxf5z" [74ed1812-b3bf-429d-b8f1-bdccb3415fb5] Running
	I0729 18:32:10.019239   77627 system_pods.go:89] "kube-scheduler-embed-certs-409322" [188cf21a-9a8a-45de-9a91-9e593626ce6d] Running
	I0729 18:32:10.019249   77627 system_pods.go:89] "metrics-server-569cc877fc-6q4nl" [57dc61cc-7490-49e5-9d03-c81aa5d25aea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:32:10.019257   77627 system_pods.go:89] "storage-provisioner" [b0b1e31d-9b5c-4e82-aea7-56184832c053] Running
	I0729 18:32:10.019267   77627 system_pods.go:126] duration metric: took 205.375742ms to wait for k8s-apps to be running ...
	I0729 18:32:10.019278   77627 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 18:32:10.019326   77627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:32:10.034632   77627 system_svc.go:56] duration metric: took 15.345747ms WaitForService to wait for kubelet
	I0729 18:32:10.034659   77627 kubeadm.go:582] duration metric: took 12.336145267s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:32:10.034687   77627 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:32:10.214205   77627 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:32:10.214240   77627 node_conditions.go:123] node cpu capacity is 2
	I0729 18:32:10.214255   77627 node_conditions.go:105] duration metric: took 179.559492ms to run NodePressure ...
	I0729 18:32:10.214269   77627 start.go:241] waiting for startup goroutines ...
	I0729 18:32:10.214279   77627 start.go:246] waiting for cluster config update ...
	I0729 18:32:10.214297   77627 start.go:255] writing updated cluster config ...
	I0729 18:32:10.214639   77627 ssh_runner.go:195] Run: rm -f paused
	I0729 18:32:10.264858   77627 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 18:32:10.266718   77627 out.go:177] * Done! kubectl is now configured to use "embed-certs-409322" cluster and "default" namespace by default
	I0729 18:32:07.934519   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:10.434593   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:13.262907   78080 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 18:32:13.263487   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:13.263679   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:32:12.934686   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:13.928481   77394 pod_ready.go:81] duration metric: took 4m0.00080059s for pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace to be "Ready" ...
	E0729 18:32:13.928509   77394 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 18:32:13.928528   77394 pod_ready.go:38] duration metric: took 4m10.042077465s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:32:13.928554   77394 kubeadm.go:597] duration metric: took 4m18.205651497s to restartPrimaryControlPlane
	W0729 18:32:13.928623   77394 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 18:32:13.928649   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:32:18.264261   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:18.264554   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:32:28.265190   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:28.265433   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:32:40.226240   77394 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.297571665s)
	I0729 18:32:40.226316   77394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:32:40.243407   77394 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:32:40.254946   77394 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:32:40.264608   77394 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:32:40.264631   77394 kubeadm.go:157] found existing configuration files:
	
	I0729 18:32:40.264675   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:32:40.274180   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:32:40.274231   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:32:40.283752   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:32:40.293163   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:32:40.293232   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:32:40.302533   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:32:40.311972   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:32:40.312024   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:32:40.321513   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:32:40.330546   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:32:40.330599   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:32:40.340190   77394 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:32:40.389517   77394 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 18:32:40.389592   77394 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:32:40.508682   77394 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:32:40.508783   77394 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:32:40.508859   77394 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 18:32:40.517673   77394 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:32:40.520623   77394 out.go:204]   - Generating certificates and keys ...
	I0729 18:32:40.520726   77394 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:32:40.520824   77394 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:32:40.520893   77394 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:32:40.520961   77394 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:32:40.521045   77394 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:32:40.521094   77394 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:32:40.521171   77394 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:32:40.521254   77394 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:32:40.521357   77394 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:32:40.521475   77394 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:32:40.521535   77394 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:32:40.521606   77394 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:32:40.615870   77394 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:32:40.837902   77394 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 18:32:40.924418   77394 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:32:41.068573   77394 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:32:41.287201   77394 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:32:41.287991   77394 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:32:41.293523   77394 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:32:41.295211   77394 out.go:204]   - Booting up control plane ...
	I0729 18:32:41.295329   77394 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:32:41.295455   77394 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:32:41.295560   77394 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:32:41.317802   77394 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:32:41.324522   77394 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:32:41.324589   77394 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:32:41.463007   77394 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 18:32:41.463116   77394 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 18:32:41.982144   77394 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 519.208408ms
	I0729 18:32:41.982263   77394 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 18:32:46.983564   77394 kubeadm.go:310] [api-check] The API server is healthy after 5.001335599s
	I0729 18:32:46.999811   77394 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 18:32:47.018194   77394 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 18:32:47.051359   77394 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 18:32:47.051564   77394 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-888056 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 18:32:47.062615   77394 kubeadm.go:310] [bootstrap-token] Using token: a14u5x.5d4oe8yqdl9tiifc
	I0729 18:32:47.064051   77394 out.go:204]   - Configuring RBAC rules ...
	I0729 18:32:47.064187   77394 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 18:32:47.071856   77394 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 18:32:47.084985   77394 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 18:32:47.088622   77394 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 18:32:47.091797   77394 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 18:32:47.096194   77394 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 18:32:47.391394   77394 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 18:32:47.834314   77394 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 18:32:48.394665   77394 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 18:32:48.394689   77394 kubeadm.go:310] 
	I0729 18:32:48.394763   77394 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 18:32:48.394797   77394 kubeadm.go:310] 
	I0729 18:32:48.394928   77394 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 18:32:48.394941   77394 kubeadm.go:310] 
	I0729 18:32:48.394979   77394 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 18:32:48.395058   77394 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 18:32:48.395126   77394 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 18:32:48.395141   77394 kubeadm.go:310] 
	I0729 18:32:48.395221   77394 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 18:32:48.395230   77394 kubeadm.go:310] 
	I0729 18:32:48.395297   77394 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 18:32:48.395306   77394 kubeadm.go:310] 
	I0729 18:32:48.395374   77394 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 18:32:48.395467   77394 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 18:32:48.395554   77394 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 18:32:48.395563   77394 kubeadm.go:310] 
	I0729 18:32:48.395652   77394 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 18:32:48.395766   77394 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 18:32:48.395778   77394 kubeadm.go:310] 
	I0729 18:32:48.395886   77394 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a14u5x.5d4oe8yqdl9tiifc \
	I0729 18:32:48.396030   77394 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 \
	I0729 18:32:48.396062   77394 kubeadm.go:310] 	--control-plane 
	I0729 18:32:48.396071   77394 kubeadm.go:310] 
	I0729 18:32:48.396191   77394 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 18:32:48.396200   77394 kubeadm.go:310] 
	I0729 18:32:48.396276   77394 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a14u5x.5d4oe8yqdl9tiifc \
	I0729 18:32:48.396393   77394 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 
	I0729 18:32:48.397540   77394 kubeadm.go:310] W0729 18:32:40.358164    2949 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 18:32:48.397921   77394 kubeadm.go:310] W0729 18:32:40.359840    2949 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 18:32:48.398071   77394 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:32:48.398090   77394 cni.go:84] Creating CNI manager for ""
	I0729 18:32:48.398099   77394 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:32:48.399641   77394 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:32:48.266531   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:48.266736   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:32:48.400846   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:32:48.412594   77394 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
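The 496-byte conflist written by the step above is not shown in the log. As an illustration only, a minimal Go sketch of writing a typical bridge CNI configuration to /etc/cni/net.d follows; the subnet, plugin options, and file mode are assumptions, not the contents minikube actually wrote here.

// write_cni_conflist.go - illustrative sketch only (requires root to run).
package main

import (
	"log"
	"os"
)

// bridgeConflist is an assumed example of a bridge + portmap CNI config;
// the real 1-k8s.conflist contents are not recorded in this log.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Mirrors the "scp memory --> /etc/cni/net.d/1-k8s.conflist" step above.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}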
	I0729 18:32:48.434792   77394 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 18:32:48.434872   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:48.434907   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-888056 minikube.k8s.io/updated_at=2024_07_29T18_32_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899 minikube.k8s.io/name=no-preload-888056 minikube.k8s.io/primary=true
	I0729 18:32:48.672892   77394 ops.go:34] apiserver oom_adj: -16
	I0729 18:32:48.673144   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:49.173811   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:49.673775   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:50.173717   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:50.673774   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:51.174068   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:51.673565   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:52.173431   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:52.673602   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:53.173912   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:53.315565   77394 kubeadm.go:1113] duration metric: took 4.880757535s to wait for elevateKubeSystemPrivileges
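The repeated "kubectl get sa default" lines above are a retry loop: after kubeadm init, minikube waits for the default ServiceAccount to exist before the minikube-rbac ClusterRoleBinding can take effect. A minimal Go sketch of that pattern follows; the binary and kubeconfig paths, interval, and timeout are assumptions taken from the log, not minikube's actual code.

// wait_default_sa.go - illustrative sketch of the poll-until-ServiceAccount-exists step.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default ServiceAccount now exists
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence seen in the log
	}
	return fmt.Errorf("default ServiceAccount not created within %s", timeout)
}

func main() {
	err := waitForDefaultSA(
		"/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	if err != nil {
		fmt.Println(err)
	}
}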
	I0729 18:32:53.315609   77394 kubeadm.go:394] duration metric: took 4m57.645527986s to StartCluster
	I0729 18:32:53.315633   77394 settings.go:142] acquiring lock: {Name:mkd2c4591636cc1d19b23a0dab1807db2e7ea395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:32:53.315736   77394 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:32:53.317360   77394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:32:53.317579   77394 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.80 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:32:53.317669   77394 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 18:32:53.317784   77394 addons.go:69] Setting storage-provisioner=true in profile "no-preload-888056"
	I0729 18:32:53.317820   77394 addons.go:234] Setting addon storage-provisioner=true in "no-preload-888056"
	I0729 18:32:53.317817   77394 addons.go:69] Setting default-storageclass=true in profile "no-preload-888056"
	W0729 18:32:53.317835   77394 addons.go:243] addon storage-provisioner should already be in state true
	I0729 18:32:53.317840   77394 config.go:182] Loaded profile config "no-preload-888056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:32:53.317836   77394 addons.go:69] Setting metrics-server=true in profile "no-preload-888056"
	I0729 18:32:53.317861   77394 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-888056"
	I0729 18:32:53.317878   77394 host.go:66] Checking if "no-preload-888056" exists ...
	I0729 18:32:53.317882   77394 addons.go:234] Setting addon metrics-server=true in "no-preload-888056"
	W0729 18:32:53.317892   77394 addons.go:243] addon metrics-server should already be in state true
	I0729 18:32:53.317927   77394 host.go:66] Checking if "no-preload-888056" exists ...
	I0729 18:32:53.318302   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.318308   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.318334   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.318345   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.318301   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.318441   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.319022   77394 out.go:177] * Verifying Kubernetes components...
	I0729 18:32:53.320383   77394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:32:53.335666   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38257
	I0729 18:32:53.336170   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.336860   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.336896   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.337301   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.338104   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39753
	I0729 18:32:53.338137   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40655
	I0729 18:32:53.338545   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.338559   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.338595   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.338614   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.339076   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.339094   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.339163   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.339188   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.339510   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.340089   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.340126   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.340346   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.340557   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:32:53.344286   77394 addons.go:234] Setting addon default-storageclass=true in "no-preload-888056"
	W0729 18:32:53.344307   77394 addons.go:243] addon default-storageclass should already be in state true
	I0729 18:32:53.344335   77394 host.go:66] Checking if "no-preload-888056" exists ...
	I0729 18:32:53.344702   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.344727   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.356006   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33765
	I0729 18:32:53.356613   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.357135   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.357159   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.357517   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.357604   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34733
	I0729 18:32:53.357752   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:32:53.358011   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.358472   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.358490   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.358898   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.359110   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:32:53.359546   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:32:53.360493   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:32:53.361662   77394 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:32:53.362464   77394 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 18:32:53.363294   77394 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:32:53.363311   77394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 18:32:53.363331   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:32:53.364170   77394 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 18:32:53.364182   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41425
	I0729 18:32:53.364186   77394 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 18:32:53.364205   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:32:53.364560   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.365040   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.365061   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.365515   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.365963   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.365983   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.367883   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.368768   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.369264   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:32:53.369284   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.369576   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:32:53.369591   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.369858   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:32:53.369964   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:32:53.370009   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:32:53.370102   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:32:53.370169   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:32:53.370198   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:32:53.370317   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:32:53.370344   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:32:53.382571   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37093
	I0729 18:32:53.382940   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.383311   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.383336   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.383748   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.383946   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:32:53.385570   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:32:53.385761   77394 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 18:32:53.385775   77394 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 18:32:53.385792   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:32:53.388411   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.388756   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:32:53.388774   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.389017   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:32:53.389193   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:32:53.389350   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:32:53.389463   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:32:53.585542   77394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:32:53.645556   77394 node_ready.go:35] waiting up to 6m0s for node "no-preload-888056" to be "Ready" ...
	I0729 18:32:53.657965   77394 node_ready.go:49] node "no-preload-888056" has status "Ready":"True"
	I0729 18:32:53.657997   77394 node_ready.go:38] duration metric: took 12.408834ms for node "no-preload-888056" to be "Ready" ...
	I0729 18:32:53.658010   77394 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:32:53.673068   77394 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:53.724224   77394 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 18:32:53.724248   77394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 18:32:53.763536   77394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 18:32:53.774123   77394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:32:53.812615   77394 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 18:32:53.812639   77394 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 18:32:53.945274   77394 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:32:53.945303   77394 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 18:32:54.107180   77394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:32:54.184354   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.184379   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.184699   77394 main.go:141] libmachine: (no-preload-888056) DBG | Closing plugin on server side
	I0729 18:32:54.184748   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.184762   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.184776   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.184786   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.185015   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.185043   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.185077   77394 main.go:141] libmachine: (no-preload-888056) DBG | Closing plugin on server side
	I0729 18:32:54.244759   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.244781   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.245108   77394 main.go:141] libmachine: (no-preload-888056) DBG | Closing plugin on server side
	I0729 18:32:54.245156   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.245169   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.782604   77394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.008443119s)
	I0729 18:32:54.782663   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.782676   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.782990   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.783010   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.783020   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.783028   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.783265   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.783283   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.946051   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.946074   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.946396   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.946418   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.946430   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.946439   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.946680   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.946698   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.946710   77394 addons.go:475] Verifying addon metrics-server=true in "no-preload-888056"
	I0729 18:32:54.948362   77394 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 18:32:54.949821   77394 addons.go:510] duration metric: took 1.632153415s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
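The addon enablement above can be re-checked by hand against the same profile. A minimal sketch, assuming the minikube profile and the kubectl context are both named no-preload-888056; kubectl top only succeeds once the metrics-server APIService is actually serving:

# List addon state for the profile
minikube -p no-preload-888056 addons list | grep -E 'metrics-server|storage-provisioner'

# Confirm the metrics-server Deployment was created in kube-system
kubectl --context no-preload-888056 -n kube-system get deploy metrics-server

# Query resource metrics once the APIService is available
kubectl --context no-preload-888056 top nodes
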
	I0729 18:32:55.679655   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:57.680175   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace has status "Ready":"False"
	I0729 18:33:00.179877   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace has status "Ready":"False"
	I0729 18:33:01.180068   77394 pod_ready.go:92] pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.180094   77394 pod_ready.go:81] duration metric: took 7.506992362s for pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.180106   77394 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-j9ddw" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.185742   77394 pod_ready.go:92] pod "coredns-5cfdc65f69-j9ddw" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.185760   77394 pod_ready.go:81] duration metric: took 5.647157ms for pod "coredns-5cfdc65f69-j9ddw" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.185769   77394 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.190056   77394 pod_ready.go:92] pod "etcd-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.190077   77394 pod_ready.go:81] duration metric: took 4.30181ms for pod "etcd-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.190085   77394 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.194255   77394 pod_ready.go:92] pod "kube-apiserver-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.194273   77394 pod_ready.go:81] duration metric: took 4.182006ms for pod "kube-apiserver-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.194284   77394 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.199056   77394 pod_ready.go:92] pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.199072   77394 pod_ready.go:81] duration metric: took 4.779158ms for pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.199081   77394 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-94ff9" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.578279   77394 pod_ready.go:92] pod "kube-proxy-94ff9" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.578299   77394 pod_ready.go:81] duration metric: took 379.211109ms for pod "kube-proxy-94ff9" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.578308   77394 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:02.378184   77394 pod_ready.go:92] pod "kube-scheduler-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:02.378205   77394 pod_ready.go:81] duration metric: took 799.890202ms for pod "kube-scheduler-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:02.378212   77394 pod_ready.go:38] duration metric: took 8.720189182s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:33:02.378226   77394 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:33:02.378282   77394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:33:02.396023   77394 api_server.go:72] duration metric: took 9.07841179s to wait for apiserver process to appear ...
	I0729 18:33:02.396050   77394 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:33:02.396070   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:33:02.403736   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 200:
	ok
	I0729 18:33:02.404828   77394 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 18:33:02.404850   77394 api_server.go:131] duration metric: took 8.793481ms to wait for apiserver health ...
	I0729 18:33:02.404858   77394 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:33:02.580656   77394 system_pods.go:59] 9 kube-system pods found
	I0729 18:33:02.580683   77394 system_pods.go:61] "coredns-5cfdc65f69-bbh6c" [66b43af3-78eb-437f-81d7-eedb4cc34349] Running
	I0729 18:33:02.580687   77394 system_pods.go:61] "coredns-5cfdc65f69-j9ddw" [679f8750-86aa-4e00-8291-6996b54b1930] Running
	I0729 18:33:02.580691   77394 system_pods.go:61] "etcd-no-preload-888056" [abcd648d-659a-4f02-a769-f2222eaac945] Running
	I0729 18:33:02.580695   77394 system_pods.go:61] "kube-apiserver-no-preload-888056" [99a48803-06b1-44a6-a0cc-f28f2ba7235f] Running
	I0729 18:33:02.580699   77394 system_pods.go:61] "kube-controller-manager-no-preload-888056" [6bb3d64c-9fef-41ee-a68d-170fac01dec5] Running
	I0729 18:33:02.580702   77394 system_pods.go:61] "kube-proxy-94ff9" [dd06899e-3d54-4b71-bda6-f8c6d06ce100] Running
	I0729 18:33:02.580704   77394 system_pods.go:61] "kube-scheduler-no-preload-888056" [a1b60226-df5e-45ce-8382-a8d277278129] Running
	I0729 18:33:02.580710   77394 system_pods.go:61] "metrics-server-78fcd8795b-9qqmj" [45bbbaf3-cf3e-4db1-9eec-693425bc5dff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:33:02.580714   77394 system_pods.go:61] "storage-provisioner" [0aacb67c-abea-47fb-a2f1-f1245e68599a] Running
	I0729 18:33:02.580721   77394 system_pods.go:74] duration metric: took 175.857868ms to wait for pod list to return data ...
	I0729 18:33:02.580728   77394 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:33:02.778962   77394 default_sa.go:45] found service account: "default"
	I0729 18:33:02.778987   77394 default_sa.go:55] duration metric: took 198.250326ms for default service account to be created ...
	I0729 18:33:02.778995   77394 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 18:33:02.981123   77394 system_pods.go:86] 9 kube-system pods found
	I0729 18:33:02.981159   77394 system_pods.go:89] "coredns-5cfdc65f69-bbh6c" [66b43af3-78eb-437f-81d7-eedb4cc34349] Running
	I0729 18:33:02.981166   77394 system_pods.go:89] "coredns-5cfdc65f69-j9ddw" [679f8750-86aa-4e00-8291-6996b54b1930] Running
	I0729 18:33:02.981175   77394 system_pods.go:89] "etcd-no-preload-888056" [abcd648d-659a-4f02-a769-f2222eaac945] Running
	I0729 18:33:02.981181   77394 system_pods.go:89] "kube-apiserver-no-preload-888056" [99a48803-06b1-44a6-a0cc-f28f2ba7235f] Running
	I0729 18:33:02.981186   77394 system_pods.go:89] "kube-controller-manager-no-preload-888056" [6bb3d64c-9fef-41ee-a68d-170fac01dec5] Running
	I0729 18:33:02.981190   77394 system_pods.go:89] "kube-proxy-94ff9" [dd06899e-3d54-4b71-bda6-f8c6d06ce100] Running
	I0729 18:33:02.981196   77394 system_pods.go:89] "kube-scheduler-no-preload-888056" [a1b60226-df5e-45ce-8382-a8d277278129] Running
	I0729 18:33:02.981206   77394 system_pods.go:89] "metrics-server-78fcd8795b-9qqmj" [45bbbaf3-cf3e-4db1-9eec-693425bc5dff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:33:02.981214   77394 system_pods.go:89] "storage-provisioner" [0aacb67c-abea-47fb-a2f1-f1245e68599a] Running
	I0729 18:33:02.981228   77394 system_pods.go:126] duration metric: took 202.226569ms to wait for k8s-apps to be running ...
	I0729 18:33:02.981239   77394 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 18:33:02.981290   77394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:33:02.999134   77394 system_svc.go:56] duration metric: took 17.878004ms WaitForService to wait for kubelet
	I0729 18:33:02.999169   77394 kubeadm.go:582] duration metric: took 9.681562891s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:33:02.999187   77394 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:33:03.179246   77394 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:33:03.179274   77394 node_conditions.go:123] node cpu capacity is 2
	I0729 18:33:03.179286   77394 node_conditions.go:105] duration metric: took 180.093491ms to run NodePressure ...
	I0729 18:33:03.179312   77394 start.go:241] waiting for startup goroutines ...
	I0729 18:33:03.179322   77394 start.go:246] waiting for cluster config update ...
	I0729 18:33:03.179344   77394 start.go:255] writing updated cluster config ...
	I0729 18:33:03.179658   77394 ssh_runner.go:195] Run: rm -f paused
	I0729 18:33:03.228664   77394 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 18:33:03.230706   77394 out.go:177] * Done! kubectl is now configured to use "no-preload-888056" cluster and "default" namespace by default
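The sequence above (node Ready, system-critical pods Ready, apiserver /healthz returning 200) can be reproduced manually against the finished cluster. A minimal sketch, assuming the kubectl context name matches the profile no-preload-888056:

# Node readiness
kubectl --context no-preload-888056 get nodes

# Wait for the same system-critical pods the log polls (CoreDNS, etcd, kube-apiserver,
# kube-controller-manager, kube-proxy, kube-scheduler)
kubectl --context no-preload-888056 -n kube-system wait pod --all --for=condition=Ready --timeout=6m

# Apiserver health, equivalent to the https://192.168.72.80:8443/healthz probe above
kubectl --context no-preload-888056 get --raw /healthz
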
	I0729 18:33:28.269122   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:33:28.269375   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:33:28.269399   78080 kubeadm.go:310] 
	I0729 18:33:28.269433   78080 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 18:33:28.269471   78080 kubeadm.go:310] 		timed out waiting for the condition
	I0729 18:33:28.269480   78080 kubeadm.go:310] 
	I0729 18:33:28.269508   78080 kubeadm.go:310] 	This error is likely caused by:
	I0729 18:33:28.269541   78080 kubeadm.go:310] 		- The kubelet is not running
	I0729 18:33:28.269686   78080 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 18:33:28.269698   78080 kubeadm.go:310] 
	I0729 18:33:28.269846   78080 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 18:33:28.269902   78080 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 18:33:28.269946   78080 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 18:33:28.269969   78080 kubeadm.go:310] 
	I0729 18:33:28.270132   78080 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 18:33:28.270246   78080 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 18:33:28.270258   78080 kubeadm.go:310] 
	I0729 18:33:28.270434   78080 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 18:33:28.270567   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 18:33:28.270674   78080 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 18:33:28.270774   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 18:33:28.270784   78080 kubeadm.go:310] 
	I0729 18:33:28.271347   78080 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:33:28.271428   78080 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 18:33:28.271503   78080 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 18:33:28.271650   78080 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 18:33:28.271713   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:33:28.743675   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:33:28.759228   78080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:33:28.768522   78080 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:33:28.768546   78080 kubeadm.go:157] found existing configuration files:
	
	I0729 18:33:28.768593   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:33:28.777423   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:33:28.777481   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:33:28.786450   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:33:28.795335   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:33:28.795386   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:33:28.804519   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:33:28.813137   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:33:28.813193   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:33:28.822053   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:33:28.830463   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:33:28.830513   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
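The stale-config check that just ran reduces to: for each kubeconfig under /etc/kubernetes, keep it only if it already references https://control-plane.minikube.internal:8443, otherwise delete it so kubeadm can regenerate it. A minimal shell sketch of the equivalent loop, run on the node:

for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
    || sudo rm -f "/etc/kubernetes/$f"
done
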
	I0729 18:33:28.839818   78080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:33:29.066010   78080 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:35:25.197434   78080 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 18:35:25.197566   78080 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 18:35:25.199476   78080 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 18:35:25.199554   78080 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:35:25.199667   78080 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:35:25.199800   78080 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:35:25.199937   78080 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:35:25.200054   78080 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:35:25.201801   78080 out.go:204]   - Generating certificates and keys ...
	I0729 18:35:25.201875   78080 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:35:25.201944   78080 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:35:25.202073   78080 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:35:25.202136   78080 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:35:25.202231   78080 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:35:25.202287   78080 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:35:25.202339   78080 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:35:25.202426   78080 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:35:25.202492   78080 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:35:25.202560   78080 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:35:25.202603   78080 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:35:25.202692   78080 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:35:25.202779   78080 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:35:25.202863   78080 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:35:25.202962   78080 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:35:25.203070   78080 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:35:25.203213   78080 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:35:25.203289   78080 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:35:25.203323   78080 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:35:25.203381   78080 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:35:25.204837   78080 out.go:204]   - Booting up control plane ...
	I0729 18:35:25.204920   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:35:25.204985   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:35:25.205053   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:35:25.205146   78080 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:35:25.205274   78080 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 18:35:25.205316   78080 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 18:35:25.205379   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.205591   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.205658   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.205828   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.205926   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.206142   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.206204   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.206411   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.206488   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.206683   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.206698   78080 kubeadm.go:310] 
	I0729 18:35:25.206755   78080 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 18:35:25.206817   78080 kubeadm.go:310] 		timed out waiting for the condition
	I0729 18:35:25.206827   78080 kubeadm.go:310] 
	I0729 18:35:25.206860   78080 kubeadm.go:310] 	This error is likely caused by:
	I0729 18:35:25.206890   78080 kubeadm.go:310] 		- The kubelet is not running
	I0729 18:35:25.206975   78080 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 18:35:25.206985   78080 kubeadm.go:310] 
	I0729 18:35:25.207099   78080 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 18:35:25.207134   78080 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 18:35:25.207167   78080 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 18:35:25.207177   78080 kubeadm.go:310] 
	I0729 18:35:25.207289   78080 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 18:35:25.207403   78080 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 18:35:25.207412   78080 kubeadm.go:310] 
	I0729 18:35:25.207532   78080 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 18:35:25.207640   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 18:35:25.207754   78080 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 18:35:25.207821   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 18:35:25.207854   78080 kubeadm.go:310] 
	I0729 18:35:25.207886   78080 kubeadm.go:394] duration metric: took 7m57.080498205s to StartCluster
	I0729 18:35:25.207923   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:35:25.207983   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:35:25.251803   78080 cri.go:89] found id: ""
	I0729 18:35:25.251841   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.251852   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:35:25.251859   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:35:25.251920   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:35:25.287842   78080 cri.go:89] found id: ""
	I0729 18:35:25.287877   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.287895   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:35:25.287903   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:35:25.287967   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:35:25.324546   78080 cri.go:89] found id: ""
	I0729 18:35:25.324573   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.324582   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:35:25.324588   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:35:25.324634   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:35:25.375723   78080 cri.go:89] found id: ""
	I0729 18:35:25.375746   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.375753   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:35:25.375759   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:35:25.375812   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:35:25.412580   78080 cri.go:89] found id: ""
	I0729 18:35:25.412604   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.412612   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:35:25.412617   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:35:25.412664   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:35:25.449360   78080 cri.go:89] found id: ""
	I0729 18:35:25.449397   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.449406   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:35:25.449413   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:35:25.449464   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:35:25.485655   78080 cri.go:89] found id: ""
	I0729 18:35:25.485687   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.485698   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:35:25.485705   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:35:25.485769   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:35:25.521752   78080 cri.go:89] found id: ""
	I0729 18:35:25.521776   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.521783   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:35:25.521792   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:35:25.521808   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:35:25.562894   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:35:25.562922   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:35:25.623879   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:35:25.623912   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:35:25.647315   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:35:25.647341   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:35:25.744827   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:35:25.744850   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:35:25.744865   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
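The diagnostics gathered above can also be pulled interactively over minikube ssh; the profile name is not shown in this excerpt, so <profile> below is a hypothetical placeholder:

# Same container listing the test collects (no kube-* containers were found here)
minikube -p <profile> ssh -- sudo crictl ps -a

# Kubelet and CRI-O journals, matching the two journalctl calls above
minikube -p <profile> ssh -- sudo journalctl -u kubelet -n 400
minikube -p <profile> ssh -- sudo journalctl -u crio -n 400
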
	W0729 18:35:25.849394   78080 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 18:35:25.849445   78080 out.go:239] * 
	W0729 18:35:25.849520   78080 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 18:35:25.849558   78080 out.go:239] * 
	W0729 18:35:25.850438   78080 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 18:35:25.853770   78080 out.go:177] 
	W0729 18:35:25.854982   78080 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 18:35:25.855035   78080 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 18:35:25.855060   78080 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 18:35:25.856444   78080 out.go:177] 
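The run ends with minikube's own remediation hint. A minimal sketch of acting on it, again using the hypothetical <profile> placeholder for this cluster's profile name:

# Inspect the kubelet journal referenced by the suggestion
minikube -p <profile> ssh -- sudo journalctl -xeu kubelet

# Retry the start with the suggested kubelet cgroup-driver override
minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
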
	
	
	==> CRI-O <==
	Jul 29 18:42:05 no-preload-888056 crio[736]: time="2024-07-29 18:42:05.234300897Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278525234275906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9786f94-6bd4-4790-8527-c65e252b68b1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:42:05 no-preload-888056 crio[736]: time="2024-07-29 18:42:05.234735794Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f25c6b9-9207-4caa-8fac-c945940e13fd name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:42:05 no-preload-888056 crio[736]: time="2024-07-29 18:42:05.234816253Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f25c6b9-9207-4caa-8fac-c945940e13fd name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:42:05 no-preload-888056 crio[736]: time="2024-07-29 18:42:05.235121594Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:779e9739bfde18464512468df0e87f48c1c401d4ce273a6095af79033ffe2656,PodSandboxId:c92eac849c05a276450f5ed21c16280f037924bd6f261fc4fac83527ad034d67,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277975185304948,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aacb67c-abea-47fb-a2f1-f1245e68599a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85701264cf72fe0d32d7f7107aafb2d5901645a6cafbbdef791511be37ccae55,PodSandboxId:afe9dc082f2fd7f1cbf73a448ac50816520049fb66834326f61b115559907037,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277974206890914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-j9ddw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679f8750-86aa-4e00-8291-6996b54b1930,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6857c552e13c1c01ae4c5d44e049fa2118b38a61c4aec37092311630f54fc67,PodSandboxId:005b4fcdc8b0cb00146953eadff1af1ab3bd974ac03e464760ebaaae6e094e7e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277974043899160,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bbh6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66
b43af3-78eb-437f-81d7-eedb4cc34349,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b1774d6fcb55cec17aa29d4f0706d63871f6c0b47f54375c40db87b04b70742,PodSandboxId:4c608bb1fab59208b201b2829ef27301d6a92b4c385822079ead11d2d1f59c93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722277973250368443,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-94ff9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd06899e-3d54-4b71-bda6-f8c6d06ce100,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d436b0a14a79af77c8f0c8cfe3de4fd0a11bdd340381691ffe45ce54fbe56f1,PodSandboxId:347441c86a6508adfde0f708f5cd0b9894be414137b73f1018e84e28c1bb8e38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722277962604229613,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06c4a8e42fd4c8af4ba53f7fe0baa3b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a585ae36a26fed874f5cc3160be36d9a1efe57aaffbcb6d9be93da7f450b4c1,PodSandboxId:b01d9cf9e0e5b22f09c784ffd72f3bb05813a57653ac5ed726763865106b58a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722277962613018790,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fb17f47342c625216d5a613149e748,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e06f4bdecbf2629adb6db26a31717832f3ec841760329334040be495323ba8,PodSandboxId:70713d0f76b9c9553978014fcb78c07125601ddf5932e9f9956b56b3c1a7b13f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722277962580185978,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbe9d632c1637a08ae56bd9899dd403,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5f8d9c79b25a7fb9ee00be8049c0d1e607e78f6bc95d4340b6a3ffbfcf1dd3,PodSandboxId:1849372e07553a89feb0ce99c56ca232346c1d20d288c1d165910237adb69abc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722277962549420977,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f538f33d1fcf149f95291a1ac2f3fb29,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8521c4072867629f4deee425dc4bca36a97b2903ae0134a72ff6192cfc236dee,PodSandboxId:bcc0dc1963755939bacfbc748220969a0405ee97cf6e49e81b4247851fe33ea4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722277678156067131,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbe9d632c1637a08ae56bd9899dd403,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f25c6b9-9207-4caa-8fac-c945940e13fd name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:42:05 no-preload-888056 crio[736]: time="2024-07-29 18:42:05.275909896Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3348154f-db22-45f5-9c65-e15932e503a4 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:42:05 no-preload-888056 crio[736]: time="2024-07-29 18:42:05.276060190Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3348154f-db22-45f5-9c65-e15932e503a4 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:42:05 no-preload-888056 crio[736]: time="2024-07-29 18:42:05.277219735Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=41e1ee3d-a59b-4bdd-8979-68f327da8347 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:42:05 no-preload-888056 crio[736]: time="2024-07-29 18:42:05.277551092Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278525277528618,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=41e1ee3d-a59b-4bdd-8979-68f327da8347 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:42:05 no-preload-888056 crio[736]: time="2024-07-29 18:42:05.278291325Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=665cf58a-2fe8-4e70-9aa2-7f7a522b2048 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:42:05 no-preload-888056 crio[736]: time="2024-07-29 18:42:05.278345033Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=665cf58a-2fe8-4e70-9aa2-7f7a522b2048 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:42:05 no-preload-888056 crio[736]: time="2024-07-29 18:42:05.278574597Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:779e9739bfde18464512468df0e87f48c1c401d4ce273a6095af79033ffe2656,PodSandboxId:c92eac849c05a276450f5ed21c16280f037924bd6f261fc4fac83527ad034d67,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277975185304948,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aacb67c-abea-47fb-a2f1-f1245e68599a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85701264cf72fe0d32d7f7107aafb2d5901645a6cafbbdef791511be37ccae55,PodSandboxId:afe9dc082f2fd7f1cbf73a448ac50816520049fb66834326f61b115559907037,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277974206890914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-j9ddw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679f8750-86aa-4e00-8291-6996b54b1930,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6857c552e13c1c01ae4c5d44e049fa2118b38a61c4aec37092311630f54fc67,PodSandboxId:005b4fcdc8b0cb00146953eadff1af1ab3bd974ac03e464760ebaaae6e094e7e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277974043899160,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bbh6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66
b43af3-78eb-437f-81d7-eedb4cc34349,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b1774d6fcb55cec17aa29d4f0706d63871f6c0b47f54375c40db87b04b70742,PodSandboxId:4c608bb1fab59208b201b2829ef27301d6a92b4c385822079ead11d2d1f59c93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722277973250368443,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-94ff9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd06899e-3d54-4b71-bda6-f8c6d06ce100,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d436b0a14a79af77c8f0c8cfe3de4fd0a11bdd340381691ffe45ce54fbe56f1,PodSandboxId:347441c86a6508adfde0f708f5cd0b9894be414137b73f1018e84e28c1bb8e38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722277962604229613,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06c4a8e42fd4c8af4ba53f7fe0baa3b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a585ae36a26fed874f5cc3160be36d9a1efe57aaffbcb6d9be93da7f450b4c1,PodSandboxId:b01d9cf9e0e5b22f09c784ffd72f3bb05813a57653ac5ed726763865106b58a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722277962613018790,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fb17f47342c625216d5a613149e748,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e06f4bdecbf2629adb6db26a31717832f3ec841760329334040be495323ba8,PodSandboxId:70713d0f76b9c9553978014fcb78c07125601ddf5932e9f9956b56b3c1a7b13f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722277962580185978,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbe9d632c1637a08ae56bd9899dd403,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5f8d9c79b25a7fb9ee00be8049c0d1e607e78f6bc95d4340b6a3ffbfcf1dd3,PodSandboxId:1849372e07553a89feb0ce99c56ca232346c1d20d288c1d165910237adb69abc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722277962549420977,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f538f33d1fcf149f95291a1ac2f3fb29,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8521c4072867629f4deee425dc4bca36a97b2903ae0134a72ff6192cfc236dee,PodSandboxId:bcc0dc1963755939bacfbc748220969a0405ee97cf6e49e81b4247851fe33ea4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722277678156067131,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbe9d632c1637a08ae56bd9899dd403,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=665cf58a-2fe8-4e70-9aa2-7f7a522b2048 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:42:05 no-preload-888056 crio[736]: time="2024-07-29 18:42:05.318252289Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d3e6aeae-3681-43f6-96c5-05ee241f3c88 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:42:05 no-preload-888056 crio[736]: time="2024-07-29 18:42:05.318325112Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d3e6aeae-3681-43f6-96c5-05ee241f3c88 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:42:05 no-preload-888056 crio[736]: time="2024-07-29 18:42:05.319397366Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c352d4b4-dee6-4b04-9a9d-fe045d34efb6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:42:05 no-preload-888056 crio[736]: time="2024-07-29 18:42:05.319731836Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278525319712123,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c352d4b4-dee6-4b04-9a9d-fe045d34efb6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:42:05 no-preload-888056 crio[736]: time="2024-07-29 18:42:05.320535277Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ff84676-23c6-4ca2-afae-6bc5f2a14176 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:42:05 no-preload-888056 crio[736]: time="2024-07-29 18:42:05.320621262Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ff84676-23c6-4ca2-afae-6bc5f2a14176 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:42:05 no-preload-888056 crio[736]: time="2024-07-29 18:42:05.320825641Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:779e9739bfde18464512468df0e87f48c1c401d4ce273a6095af79033ffe2656,PodSandboxId:c92eac849c05a276450f5ed21c16280f037924bd6f261fc4fac83527ad034d67,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277975185304948,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aacb67c-abea-47fb-a2f1-f1245e68599a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85701264cf72fe0d32d7f7107aafb2d5901645a6cafbbdef791511be37ccae55,PodSandboxId:afe9dc082f2fd7f1cbf73a448ac50816520049fb66834326f61b115559907037,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277974206890914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-j9ddw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679f8750-86aa-4e00-8291-6996b54b1930,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6857c552e13c1c01ae4c5d44e049fa2118b38a61c4aec37092311630f54fc67,PodSandboxId:005b4fcdc8b0cb00146953eadff1af1ab3bd974ac03e464760ebaaae6e094e7e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277974043899160,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bbh6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66
b43af3-78eb-437f-81d7-eedb4cc34349,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b1774d6fcb55cec17aa29d4f0706d63871f6c0b47f54375c40db87b04b70742,PodSandboxId:4c608bb1fab59208b201b2829ef27301d6a92b4c385822079ead11d2d1f59c93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722277973250368443,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-94ff9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd06899e-3d54-4b71-bda6-f8c6d06ce100,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d436b0a14a79af77c8f0c8cfe3de4fd0a11bdd340381691ffe45ce54fbe56f1,PodSandboxId:347441c86a6508adfde0f708f5cd0b9894be414137b73f1018e84e28c1bb8e38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722277962604229613,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06c4a8e42fd4c8af4ba53f7fe0baa3b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a585ae36a26fed874f5cc3160be36d9a1efe57aaffbcb6d9be93da7f450b4c1,PodSandboxId:b01d9cf9e0e5b22f09c784ffd72f3bb05813a57653ac5ed726763865106b58a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722277962613018790,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fb17f47342c625216d5a613149e748,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e06f4bdecbf2629adb6db26a31717832f3ec841760329334040be495323ba8,PodSandboxId:70713d0f76b9c9553978014fcb78c07125601ddf5932e9f9956b56b3c1a7b13f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722277962580185978,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbe9d632c1637a08ae56bd9899dd403,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5f8d9c79b25a7fb9ee00be8049c0d1e607e78f6bc95d4340b6a3ffbfcf1dd3,PodSandboxId:1849372e07553a89feb0ce99c56ca232346c1d20d288c1d165910237adb69abc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722277962549420977,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f538f33d1fcf149f95291a1ac2f3fb29,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8521c4072867629f4deee425dc4bca36a97b2903ae0134a72ff6192cfc236dee,PodSandboxId:bcc0dc1963755939bacfbc748220969a0405ee97cf6e49e81b4247851fe33ea4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722277678156067131,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbe9d632c1637a08ae56bd9899dd403,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6ff84676-23c6-4ca2-afae-6bc5f2a14176 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:42:05 no-preload-888056 crio[736]: time="2024-07-29 18:42:05.360224996Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6ca530fd-d775-43a2-89c9-3119a2a8e042 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:42:05 no-preload-888056 crio[736]: time="2024-07-29 18:42:05.360329060Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6ca530fd-d775-43a2-89c9-3119a2a8e042 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:42:05 no-preload-888056 crio[736]: time="2024-07-29 18:42:05.361613313Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=42beffdf-4587-44fd-b133-0c3215eda8d1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:42:05 no-preload-888056 crio[736]: time="2024-07-29 18:42:05.361987390Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278525361919493,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=42beffdf-4587-44fd-b133-0c3215eda8d1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:42:05 no-preload-888056 crio[736]: time="2024-07-29 18:42:05.362424483Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8186c41e-cdbd-4547-9e64-7ed2ec1edb46 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:42:05 no-preload-888056 crio[736]: time="2024-07-29 18:42:05.362472661Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8186c41e-cdbd-4547-9e64-7ed2ec1edb46 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:42:05 no-preload-888056 crio[736]: time="2024-07-29 18:42:05.362669687Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:779e9739bfde18464512468df0e87f48c1c401d4ce273a6095af79033ffe2656,PodSandboxId:c92eac849c05a276450f5ed21c16280f037924bd6f261fc4fac83527ad034d67,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277975185304948,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aacb67c-abea-47fb-a2f1-f1245e68599a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85701264cf72fe0d32d7f7107aafb2d5901645a6cafbbdef791511be37ccae55,PodSandboxId:afe9dc082f2fd7f1cbf73a448ac50816520049fb66834326f61b115559907037,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277974206890914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-j9ddw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679f8750-86aa-4e00-8291-6996b54b1930,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6857c552e13c1c01ae4c5d44e049fa2118b38a61c4aec37092311630f54fc67,PodSandboxId:005b4fcdc8b0cb00146953eadff1af1ab3bd974ac03e464760ebaaae6e094e7e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277974043899160,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bbh6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66
b43af3-78eb-437f-81d7-eedb4cc34349,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b1774d6fcb55cec17aa29d4f0706d63871f6c0b47f54375c40db87b04b70742,PodSandboxId:4c608bb1fab59208b201b2829ef27301d6a92b4c385822079ead11d2d1f59c93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722277973250368443,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-94ff9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd06899e-3d54-4b71-bda6-f8c6d06ce100,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d436b0a14a79af77c8f0c8cfe3de4fd0a11bdd340381691ffe45ce54fbe56f1,PodSandboxId:347441c86a6508adfde0f708f5cd0b9894be414137b73f1018e84e28c1bb8e38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722277962604229613,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06c4a8e42fd4c8af4ba53f7fe0baa3b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a585ae36a26fed874f5cc3160be36d9a1efe57aaffbcb6d9be93da7f450b4c1,PodSandboxId:b01d9cf9e0e5b22f09c784ffd72f3bb05813a57653ac5ed726763865106b58a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722277962613018790,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fb17f47342c625216d5a613149e748,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e06f4bdecbf2629adb6db26a31717832f3ec841760329334040be495323ba8,PodSandboxId:70713d0f76b9c9553978014fcb78c07125601ddf5932e9f9956b56b3c1a7b13f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722277962580185978,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbe9d632c1637a08ae56bd9899dd403,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5f8d9c79b25a7fb9ee00be8049c0d1e607e78f6bc95d4340b6a3ffbfcf1dd3,PodSandboxId:1849372e07553a89feb0ce99c56ca232346c1d20d288c1d165910237adb69abc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722277962549420977,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f538f33d1fcf149f95291a1ac2f3fb29,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8521c4072867629f4deee425dc4bca36a97b2903ae0134a72ff6192cfc236dee,PodSandboxId:bcc0dc1963755939bacfbc748220969a0405ee97cf6e49e81b4247851fe33ea4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722277678156067131,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbe9d632c1637a08ae56bd9899dd403,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8186c41e-cdbd-4547-9e64-7ed2ec1edb46 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	779e9739bfde1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   c92eac849c05a       storage-provisioner
	85701264cf72f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   afe9dc082f2fd       coredns-5cfdc65f69-j9ddw
	f6857c552e13c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   005b4fcdc8b0c       coredns-5cfdc65f69-bbh6c
	2b1774d6fcb55       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   9 minutes ago       Running             kube-proxy                0                   4c608bb1fab59       kube-proxy-94ff9
	2a585ae36a26f       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   9 minutes ago       Running             kube-controller-manager   2                   b01d9cf9e0e5b       kube-controller-manager-no-preload-888056
	7d436b0a14a79       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   9 minutes ago       Running             kube-scheduler            2                   347441c86a650       kube-scheduler-no-preload-888056
	f2e06f4bdecbf       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   9 minutes ago       Running             kube-apiserver            2                   70713d0f76b9c       kube-apiserver-no-preload-888056
	5c5f8d9c79b25       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   9 minutes ago       Running             etcd                      2                   1849372e07553       etcd-no-preload-888056
	8521c40728676       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   14 minutes ago      Exited              kube-apiserver            1                   bcc0dc1963755       kube-apiserver-no-preload-888056
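The table above is the node-side container view; assuming the same profile, a roughly equivalent listing could be pulled by hand with crictl (illustrative command, not part of the test run):

	out/minikube-linux-amd64 -p no-preload-888056 ssh "sudo crictl ps -a"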
	
	
	==> coredns [85701264cf72fe0d32d7f7107aafb2d5901645a6cafbbdef791511be37ccae55] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f6857c552e13c1c01ae4c5d44e049fa2118b38a61c4aec37092311630f54fc67] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-888056
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-888056
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=no-preload-888056
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T18_32_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:32:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-888056
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:41:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:38:05 +0000   Mon, 29 Jul 2024 18:32:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:38:05 +0000   Mon, 29 Jul 2024 18:32:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:38:05 +0000   Mon, 29 Jul 2024 18:32:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:38:05 +0000   Mon, 29 Jul 2024 18:32:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.80
	  Hostname:    no-preload-888056
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 758826d64bb7478da0705a25a2608906
	  System UUID:                758826d6-4bb7-478d-a070-5a25a2608906
	  Boot ID:                    875ba7f7-9aaa-4f23-90f2-2198eefaec6c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-bbh6c                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 coredns-5cfdc65f69-j9ddw                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 etcd-no-preload-888056                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-no-preload-888056             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-no-preload-888056    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-94ff9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-scheduler-no-preload-888056             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-78fcd8795b-9qqmj              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m11s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)    0 (0%)
	  memory             440Mi (20%)   340Mi (16%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m11s                  kube-proxy       
	  Normal  Starting                 9m24s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m23s (x8 over 9m24s)  kubelet          Node no-preload-888056 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m23s (x8 over 9m24s)  kubelet          Node no-preload-888056 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m23s (x7 over 9m24s)  kubelet          Node no-preload-888056 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m18s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m18s                  kubelet          Node no-preload-888056 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s                  kubelet          Node no-preload-888056 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s                  kubelet          Node no-preload-888056 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m13s                  node-controller  Node no-preload-888056 event: Registered Node no-preload-888056 in Controller
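The node description above is the standard kubectl view; assuming minikube named the kubeconfig context after the profile (its default behaviour), the same output could be fetched with:

	kubectl --context no-preload-888056 describe node no-preload-888056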
	
	
	==> dmesg <==
	[  +0.057652] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.049776] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.258244] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.662198] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.603745] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.924650] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.061107] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061594] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.191247] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[  +0.149984] systemd-fstab-generator[692]: Ignoring "noauto" option for root device
	[  +0.287943] systemd-fstab-generator[721]: Ignoring "noauto" option for root device
	[ +14.909600] systemd-fstab-generator[1187]: Ignoring "noauto" option for root device
	[  +0.064478] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.072496] systemd-fstab-generator[1310]: Ignoring "noauto" option for root device
	[Jul29 18:28] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.815521] kauditd_printk_skb: 93 callbacks suppressed
	[Jul29 18:32] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.556957] systemd-fstab-generator[2975]: Ignoring "noauto" option for root device
	[  +4.624635] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.467554] systemd-fstab-generator[3295]: Ignoring "noauto" option for root device
	[  +5.735400] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.209870] systemd-fstab-generator[3493]: Ignoring "noauto" option for root device
	[  +7.070286] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [5c5f8d9c79b25a7fb9ee00be8049c0d1e607e78f6bc95d4340b6a3ffbfcf1dd3] <==
	{"level":"info","ts":"2024-07-29T18:32:42.958156Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T18:32:42.960309Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"83bc08ad82c569f4","initial-advertise-peer-urls":["https://192.168.72.80:2380"],"listen-peer-urls":["https://192.168.72.80:2380"],"advertise-client-urls":["https://192.168.72.80:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.80:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T18:32:42.960327Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T18:32:42.963002Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.72.80:2380"}
	{"level":"info","ts":"2024-07-29T18:32:42.963103Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T18:32:43.523547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83bc08ad82c569f4 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T18:32:43.523658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83bc08ad82c569f4 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T18:32:43.524151Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83bc08ad82c569f4 received MsgPreVoteResp from 83bc08ad82c569f4 at term 1"}
	{"level":"info","ts":"2024-07-29T18:32:43.52421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83bc08ad82c569f4 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T18:32:43.524237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83bc08ad82c569f4 received MsgVoteResp from 83bc08ad82c569f4 at term 2"}
	{"level":"info","ts":"2024-07-29T18:32:43.524263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83bc08ad82c569f4 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T18:32:43.524289Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 83bc08ad82c569f4 elected leader 83bc08ad82c569f4 at term 2"}
	{"level":"info","ts":"2024-07-29T18:32:43.529038Z","caller":"etcdserver/server.go:2628","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:32:43.533132Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"83bc08ad82c569f4","local-member-attributes":"{Name:no-preload-888056 ClientURLs:[https://192.168.72.80:2379]}","request-path":"/0/members/83bc08ad82c569f4/attributes","cluster-id":"96c4a3ba39e20af4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T18:32:43.533257Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T18:32:43.533742Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T18:32:43.533873Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"96c4a3ba39e20af4","local-member-id":"83bc08ad82c569f4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:32:43.534012Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:32:43.534034Z","caller":"etcdserver/server.go:2652","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:32:43.536406Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T18:32:43.537197Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.80:2379"}
	{"level":"info","ts":"2024-07-29T18:32:43.539671Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T18:32:43.542748Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T18:32:43.551987Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T18:32:43.552052Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:42:05 up 14 min,  0 users,  load average: 0.08, 0.22, 0.21
	Linux no-preload-888056 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8521c4072867629f4deee425dc4bca36a97b2903ae0134a72ff6192cfc236dee] <==
	W0729 18:32:38.258109       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.279825       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.295357       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.307470       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.330044       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.333238       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.354056       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.429463       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.429748       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.456393       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.544426       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.643830       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.673272       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.700717       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.720511       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.737346       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.743775       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.861367       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.878228       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.909339       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:39.024669       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:39.054100       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:39.154373       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:39.161848       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:39.246476       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f2e06f4bdecbf2629adb6db26a31717832f3ec841760329334040be495323ba8] <==
	W0729 18:37:46.102741       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 18:37:46.102828       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0729 18:37:46.104044       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 18:37:46.104118       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 18:38:46.104488       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 18:38:46.104568       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0729 18:38:46.104769       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 18:38:46.104862       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0729 18:38:46.106051       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 18:38:46.106102       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 18:40:46.107104       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 18:40:46.107454       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0729 18:40:46.107115       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 18:40:46.107562       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0729 18:40:46.108728       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 18:40:46.108761       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [2a585ae36a26fed874f5cc3160be36d9a1efe57aaffbcb6d9be93da7f450b4c1] <==
	E0729 18:36:53.020063       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 18:36:53.174403       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:37:23.026718       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 18:37:23.183496       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:37:53.033707       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 18:37:53.192106       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 18:38:05.236536       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-888056"
	E0729 18:38:23.040627       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 18:38:23.200898       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:38:53.047691       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 18:38:53.211327       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 18:38:57.790125       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="307.889µs"
	I0729 18:39:08.785778       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="236.727µs"
	E0729 18:39:23.054710       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 18:39:23.221333       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:39:53.061602       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 18:39:53.229209       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:40:23.068518       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 18:40:23.238249       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:40:53.075876       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 18:40:53.247067       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:41:23.082803       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 18:41:23.255209       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:41:53.090410       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 18:41:53.266348       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [2b1774d6fcb55cec17aa29d4f0706d63871f6c0b47f54375c40db87b04b70742] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0729 18:32:53.698126       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0729 18:32:53.715699       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.72.80"]
	E0729 18:32:53.715777       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0729 18:32:53.834449       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0729 18:32:53.834623       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 18:32:53.834657       1 server_linux.go:170] "Using iptables Proxier"
	I0729 18:32:53.848671       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0729 18:32:53.849062       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0729 18:32:53.849092       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:32:53.850788       1 config.go:197] "Starting service config controller"
	I0729 18:32:53.850818       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 18:32:53.850841       1 config.go:104] "Starting endpoint slice config controller"
	I0729 18:32:53.850845       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 18:32:53.851486       1 config.go:326] "Starting node config controller"
	I0729 18:32:53.851493       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 18:32:53.952827       1 shared_informer.go:320] Caches are synced for node config
	I0729 18:32:53.952843       1 shared_informer.go:320] Caches are synced for service config
	I0729 18:32:53.952854       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7d436b0a14a79af77c8f0c8cfe3de4fd0a11bdd340381691ffe45ce54fbe56f1] <==
	W0729 18:32:45.199647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 18:32:45.199666       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 18:32:46.057714       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 18:32:46.057766       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 18:32:46.108859       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 18:32:46.108911       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0729 18:32:46.111008       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 18:32:46.111058       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0729 18:32:46.132568       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 18:32:46.132619       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0729 18:32:46.180459       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 18:32:46.181313       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 18:32:46.212191       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 18:32:46.212306       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0729 18:32:46.269831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 18:32:46.270065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0729 18:32:46.318062       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 18:32:46.318129       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 18:32:46.360185       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 18:32:46.360314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 18:32:46.361083       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 18:32:46.361124       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0729 18:32:46.432112       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 18:32:46.432166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0729 18:32:47.876889       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 18:39:47 no-preload-888056 kubelet[3302]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:39:47 no-preload-888056 kubelet[3302]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:39:47 no-preload-888056 kubelet[3302]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:39:47 no-preload-888056 kubelet[3302]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:39:49 no-preload-888056 kubelet[3302]: E0729 18:39:49.772438    3302 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9qqmj" podUID="45bbbaf3-cf3e-4db1-9eec-693425bc5dff"
	Jul 29 18:40:03 no-preload-888056 kubelet[3302]: E0729 18:40:03.771331    3302 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9qqmj" podUID="45bbbaf3-cf3e-4db1-9eec-693425bc5dff"
	Jul 29 18:40:17 no-preload-888056 kubelet[3302]: E0729 18:40:17.771734    3302 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9qqmj" podUID="45bbbaf3-cf3e-4db1-9eec-693425bc5dff"
	Jul 29 18:40:28 no-preload-888056 kubelet[3302]: E0729 18:40:28.771024    3302 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9qqmj" podUID="45bbbaf3-cf3e-4db1-9eec-693425bc5dff"
	Jul 29 18:40:41 no-preload-888056 kubelet[3302]: E0729 18:40:41.773185    3302 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9qqmj" podUID="45bbbaf3-cf3e-4db1-9eec-693425bc5dff"
	Jul 29 18:40:47 no-preload-888056 kubelet[3302]: E0729 18:40:47.832131    3302 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:40:47 no-preload-888056 kubelet[3302]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:40:47 no-preload-888056 kubelet[3302]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:40:47 no-preload-888056 kubelet[3302]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:40:47 no-preload-888056 kubelet[3302]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:40:56 no-preload-888056 kubelet[3302]: E0729 18:40:56.771406    3302 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9qqmj" podUID="45bbbaf3-cf3e-4db1-9eec-693425bc5dff"
	Jul 29 18:41:09 no-preload-888056 kubelet[3302]: E0729 18:41:09.771187    3302 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9qqmj" podUID="45bbbaf3-cf3e-4db1-9eec-693425bc5dff"
	Jul 29 18:41:20 no-preload-888056 kubelet[3302]: E0729 18:41:20.769737    3302 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9qqmj" podUID="45bbbaf3-cf3e-4db1-9eec-693425bc5dff"
	Jul 29 18:41:33 no-preload-888056 kubelet[3302]: E0729 18:41:33.770293    3302 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9qqmj" podUID="45bbbaf3-cf3e-4db1-9eec-693425bc5dff"
	Jul 29 18:41:44 no-preload-888056 kubelet[3302]: E0729 18:41:44.771354    3302 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9qqmj" podUID="45bbbaf3-cf3e-4db1-9eec-693425bc5dff"
	Jul 29 18:41:47 no-preload-888056 kubelet[3302]: E0729 18:41:47.833180    3302 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:41:47 no-preload-888056 kubelet[3302]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:41:47 no-preload-888056 kubelet[3302]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:41:47 no-preload-888056 kubelet[3302]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:41:47 no-preload-888056 kubelet[3302]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:41:55 no-preload-888056 kubelet[3302]: E0729 18:41:55.772873    3302 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9qqmj" podUID="45bbbaf3-cf3e-4db1-9eec-693425bc5dff"
	
	
	==> storage-provisioner [779e9739bfde18464512468df0e87f48c1c401d4ce273a6095af79033ffe2656] <==
	I0729 18:32:55.292633       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 18:32:55.306882       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 18:32:55.307123       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 18:32:55.328313       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 18:32:55.328513       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-888056_c718b4ee-9753-4248-ac27-5b5bd211a006!
	I0729 18:32:55.330012       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"10aa09d0-90dc-4467-a48d-93fa86f2b19b", APIVersion:"v1", ResourceVersion:"437", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-888056_c718b4ee-9753-4248-ac27-5b5bd211a006 became leader
	I0729 18:32:55.429556       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-888056_c718b4ee-9753-4248-ac27-5b5bd211a006!
	

                                                
                                                
-- /stdout --
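For context, the kube-apiserver and kube-controller-manager sections above repeatedly fail to reach the aggregated metrics API (v1beta1.metrics.k8s.io returning 503 / "stale GroupVersion discovery"). A minimal sketch of how one could inspect the aggregated APIService on this profile; the resource name comes from the log lines above, and the expected result is an assumption based on typical behaviour while the backing pods are not Ready, not something captured in this run:

	# Check whether the metrics APIService is marked Available.
	# Assumption: it would report Available=False with a "missing endpoints"-style
	# message while the metrics-server pod is not Ready.
	kubectl --context no-preload-888056 get apiservice v1beta1.metrics.k8s.io \
	  -o jsonpath='{.status.conditions[?(@.type=="Available")].status}{" "}{.status.conditions[?(@.type=="Available")].message}{"\n"}'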
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-888056 -n no-preload-888056
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-888056 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-9qqmj
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-888056 describe pod metrics-server-78fcd8795b-9qqmj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-888056 describe pod metrics-server-78fcd8795b-9qqmj: exit status 1 (63.439454ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-9qqmj" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-888056 describe pod metrics-server-78fcd8795b-9qqmj: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.21s)
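The post-mortem above flags metrics-server-78fcd8795b-9qqmj as the only non-Running pod, and the kubelet log shows it stuck in ImagePullBackOff for fake.domain/registry.k8s.io/echoserver:1.4. A minimal sketch of commands one might run against the same profile to confirm the configured image and the pull events; the deployment name metrics-server is inferred from the ReplicaSet name in the controller-manager log, not read from this run's manifests:

	# Image configured on the metrics-server deployment (deployment name is an inference).
	kubectl --context no-preload-888056 -n kube-system get deployment metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
	# Recent events for metrics-server pods; expect repeated Failed / BackOff pull events.
	kubectl --context no-preload-888056 -n kube-system get events --sort-by=.lastTimestamp | grep -i metrics-server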

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
E0729 18:35:38.573864   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kindnet-729010/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
E0729 18:35:57.219377   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/custom-flannel-729010/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
E0729 18:36:13.356237   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/calico-729010/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
(last message repeated 38 more times)
E0729 18:36:52.902599   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
E0729 18:36:54.099015   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/enable-default-cni-729010/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
E0729 18:36:54.288164   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/flannel-729010/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
(last message repeated 24 more times)
E0729 18:37:20.265792   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/custom-flannel-729010/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
(last message repeated 30 more times)
E0729 18:37:50.517570   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/bridge-729010/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
(last message repeated 25 more times)
E0729 18:38:17.145716   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/enable-default-cni-729010/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
E0729 18:38:17.333779   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/flannel-729010/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
(last message repeated 11 more times)
E0729 18:38:29.677145   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
(last message repeated 4 more times)
E0729 18:38:34.527196   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/auto-729010/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
(last message repeated 30 more times)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
E0729 18:39:13.562230   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/bridge-729010/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
E0729 18:39:15.528700   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kindnet-729010/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
[previous warning repeated 34 more times while the API server at 192.168.50.70:8443 refused connections]
E0729 18:39:50.312458   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/calico-729010/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
[previous warning repeated 4 more times while the API server at 192.168.50.70:8443 refused connections]
E0729 18:39:55.952215   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
[previous warning repeated 60 more times while the API server at 192.168.50.70:8443 refused connections]
E0729 18:40:57.219612   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/custom-flannel-729010/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
[previous warning repeated 46 more times while the API server at 192.168.50.70:8443 refused connections]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
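For context, the repeated WARNING is the test helper's periodic pod-list poll failing because the old-k8s-version cluster's apiserver is not accepting connections after the stop. Below is a minimal sketch of an equivalent list call written with client-go; it assumes a kubeconfig at the default path and is an illustration of the request seen in the log, not the minikube helper itself.

// Hypothetical reproduction of the pod list the helper keeps retrying above.
// Assumes $HOME/.kube/config points at the cluster; not minikube test code.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent to GET /api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard.
	// While the apiserver is down this returns "connection refused", which is what the log shows.
	pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "k8s-app=kubernetes-dashboard",
	})
	if err != nil {
		fmt.Println("pod list failed:", err)
		return
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
}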
E0729 18:41:52.902563   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
E0729 18:41:54.099434   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/enable-default-cni-729010/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
E0729 18:41:54.288311   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/flannel-729010/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
(the WARNING above is repeated 56 times)
E0729 18:42:50.517516   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/bridge-729010/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
(the WARNING above is repeated 39 times)
E0729 18:43:29.676827   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
(the WARNING above is repeated 5 times)
E0729 18:43:34.528040   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/auto-729010/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
(the WARNING above is repeated 41 times)
E0729 18:44:15.528303   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kindnet-729010/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-386663 -n old-k8s-version-386663
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-386663 -n old-k8s-version-386663: exit status 2 (224.858515ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-386663" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386663 -n old-k8s-version-386663
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386663 -n old-k8s-version-386663: exit status 2 (217.129417ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-386663 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-386663 logs -n 25: (1.52287349s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-729010 sudo cat                              | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo                                  | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo                                  | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo                                  | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo find                             | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo crio                             | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-729010                                       | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	| delete  | -p                                                     | disable-driver-mounts-603863 | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | disable-driver-mounts-603863                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:19 UTC |
	|         | default-k8s-diff-port-502055                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-888056             | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC | 29 Jul 24 18:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-888056                                   | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-409322            | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC | 29 Jul 24 18:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-409322                                  | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-502055  | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC |                     |
	|         | default-k8s-diff-port-502055                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-386663        | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-888056                  | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-888056 --memory=2200                     | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:33 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-409322                 | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-409322                                  | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-502055       | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:31 UTC |
	|         | default-k8s-diff-port-502055                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-386663                              | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:22 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-386663             | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-386663                              | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 18:22:47
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 18:22:47.218965   78080 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:22:47.219209   78080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:22:47.219217   78080 out.go:304] Setting ErrFile to fd 2...
	I0729 18:22:47.219222   78080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:22:47.219370   78080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 18:22:47.219863   78080 out.go:298] Setting JSON to false
	I0729 18:22:47.220726   78080 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7519,"bootTime":1722269848,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:22:47.220777   78080 start.go:139] virtualization: kvm guest
	I0729 18:22:47.222804   78080 out.go:177] * [old-k8s-version-386663] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:22:47.224119   78080 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 18:22:47.224173   78080 notify.go:220] Checking for updates...
	I0729 18:22:47.226449   78080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:22:47.227676   78080 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:22:47.228809   78080 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 18:22:47.229914   78080 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:22:47.230906   78080 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:22:47.232363   78080 config.go:182] Loaded profile config "old-k8s-version-386663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 18:22:47.232750   78080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:22:47.232814   78080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:22:47.247542   78080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44723
	I0729 18:22:47.247909   78080 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:22:47.248418   78080 main.go:141] libmachine: Using API Version  1
	I0729 18:22:47.248436   78080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:22:47.248786   78080 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:22:47.248965   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:22:47.250635   78080 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 18:22:47.251760   78080 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:22:47.252055   78080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:22:47.252098   78080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:22:47.266291   78080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35843
	I0729 18:22:47.266672   78080 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:22:47.267136   78080 main.go:141] libmachine: Using API Version  1
	I0729 18:22:47.267157   78080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:22:47.267492   78080 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:22:47.267662   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:22:47.303335   78080 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 18:22:47.304503   78080 start.go:297] selected driver: kvm2
	I0729 18:22:47.304513   78080 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:22:47.304607   78080 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:22:47.305291   78080 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:22:47.305360   78080 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19345-11206/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:22:47.319918   78080 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:22:47.320315   78080 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:22:47.320341   78080 cni.go:84] Creating CNI manager for ""
	I0729 18:22:47.320349   78080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:22:47.320386   78080 start.go:340] cluster config:
	{Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:22:47.320480   78080 iso.go:125] acquiring lock: {Name:mke302f851ce8256f9b44dd080ed38df68285cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:22:47.322357   78080 out.go:177] * Starting "old-k8s-version-386663" primary control-plane node in "old-k8s-version-386663" cluster
	I0729 18:22:43.378634   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:22:46.450644   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:22:47.323622   78080 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:22:47.323653   78080 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 18:22:47.323660   78080 cache.go:56] Caching tarball of preloaded images
	I0729 18:22:47.323740   78080 preload.go:172] Found /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:22:47.323761   78080 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 18:22:47.323849   78080 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/config.json ...
	I0729 18:22:47.324021   78080 start.go:360] acquireMachinesLock for old-k8s-version-386663: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:22:52.530551   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:22:55.602731   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:01.682636   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:04.754621   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:10.834616   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:13.906688   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:19.986655   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:23.059064   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:29.138659   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:32.210758   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:38.290665   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:41.362732   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:47.442637   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:50.514656   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:56.594611   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:59.666706   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:05.746649   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:08.818685   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:14.898642   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:17.970619   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:24.050664   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:27.122664   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:33.202629   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:36.274678   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:42.354674   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:45.426704   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:51.506670   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:54.578602   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:00.658683   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:03.730663   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:09.810619   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:12.882598   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:18.962612   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:22.034673   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:28.114638   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:31.186598   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:37.266642   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:40.338599   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:46.418679   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:49.490705   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:55.570690   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:58.642719   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:04.722643   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:07.794711   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:13.874638   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:16.946806   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:19.951345   77627 start.go:364] duration metric: took 4m10.060086709s to acquireMachinesLock for "embed-certs-409322"
	I0729 18:26:19.951406   77627 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:26:19.951414   77627 fix.go:54] fixHost starting: 
	I0729 18:26:19.951732   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:26:19.951761   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:26:19.967602   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41827
	I0729 18:26:19.968062   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:26:19.968486   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:26:19.968505   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:26:19.968809   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:26:19.969009   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:19.969135   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:26:19.970757   77627 fix.go:112] recreateIfNeeded on embed-certs-409322: state=Stopped err=<nil>
	I0729 18:26:19.970784   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	W0729 18:26:19.970931   77627 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:26:19.972631   77627 out.go:177] * Restarting existing kvm2 VM for "embed-certs-409322" ...
	I0729 18:26:19.948656   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:26:19.948718   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:26:19.949066   77394 buildroot.go:166] provisioning hostname "no-preload-888056"
	I0729 18:26:19.949096   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:26:19.949286   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:26:19.951194   77394 machine.go:97] duration metric: took 4m37.435248922s to provisionDockerMachine
	I0729 18:26:19.951238   77394 fix.go:56] duration metric: took 4m37.45552986s for fixHost
	I0729 18:26:19.951246   77394 start.go:83] releasing machines lock for "no-preload-888056", held for 4m37.455571504s
	W0729 18:26:19.951284   77394 start.go:714] error starting host: provision: host is not running
	W0729 18:26:19.951381   77394 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 18:26:19.951389   77394 start.go:729] Will try again in 5 seconds ...
	I0729 18:26:19.973786   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Start
	I0729 18:26:19.973923   77627 main.go:141] libmachine: (embed-certs-409322) Ensuring networks are active...
	I0729 18:26:19.974594   77627 main.go:141] libmachine: (embed-certs-409322) Ensuring network default is active
	I0729 18:26:19.974930   77627 main.go:141] libmachine: (embed-certs-409322) Ensuring network mk-embed-certs-409322 is active
	I0729 18:26:19.975500   77627 main.go:141] libmachine: (embed-certs-409322) Getting domain xml...
	I0729 18:26:19.976135   77627 main.go:141] libmachine: (embed-certs-409322) Creating domain...
	I0729 18:26:21.186491   77627 main.go:141] libmachine: (embed-certs-409322) Waiting to get IP...
	I0729 18:26:21.187403   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:21.187857   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:21.187924   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:21.187843   78811 retry.go:31] will retry after 218.694883ms: waiting for machine to come up
	I0729 18:26:21.408404   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:21.408843   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:21.408872   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:21.408795   78811 retry.go:31] will retry after 335.138992ms: waiting for machine to come up
	I0729 18:26:21.745329   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:21.745805   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:21.745828   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:21.745759   78811 retry.go:31] will retry after 317.831297ms: waiting for machine to come up
	I0729 18:26:22.065446   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:22.065985   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:22.066024   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:22.065948   78811 retry.go:31] will retry after 557.945634ms: waiting for machine to come up
	I0729 18:26:22.625624   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:22.626020   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:22.626047   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:22.625967   78811 retry.go:31] will retry after 739.991425ms: waiting for machine to come up
	I0729 18:26:23.368166   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:23.368523   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:23.368549   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:23.368477   78811 retry.go:31] will retry after 878.16479ms: waiting for machine to come up
	I0729 18:26:24.248467   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:24.248871   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:24.248895   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:24.248813   78811 retry.go:31] will retry after 1.022542608s: waiting for machine to come up
	I0729 18:26:24.952911   77394 start.go:360] acquireMachinesLock for no-preload-888056: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:26:25.273470   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:25.273886   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:25.273913   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:25.273829   78811 retry.go:31] will retry after 1.313344307s: waiting for machine to come up
	I0729 18:26:26.589378   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:26.589805   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:26.589852   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:26.589769   78811 retry.go:31] will retry after 1.553795128s: waiting for machine to come up
	I0729 18:26:28.145271   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:28.145680   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:28.145704   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:28.145643   78811 retry.go:31] will retry after 1.859680601s: waiting for machine to come up
	I0729 18:26:30.007588   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:30.007988   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:30.008018   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:30.007937   78811 retry.go:31] will retry after 1.754805493s: waiting for machine to come up
	I0729 18:26:31.764527   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:31.765077   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:31.765107   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:31.765030   78811 retry.go:31] will retry after 2.769383357s: waiting for machine to come up
	I0729 18:26:34.536479   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:34.536972   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:34.537007   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:34.536921   78811 retry.go:31] will retry after 3.355218512s: waiting for machine to come up
	I0729 18:26:39.563371   77859 start.go:364] duration metric: took 3m59.712120998s to acquireMachinesLock for "default-k8s-diff-port-502055"
	I0729 18:26:39.563440   77859 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:26:39.563452   77859 fix.go:54] fixHost starting: 
	I0729 18:26:39.563871   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:26:39.563914   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:26:39.580545   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I0729 18:26:39.580962   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:26:39.581492   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:26:39.581518   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:26:39.581864   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:26:39.582096   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:39.582290   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:26:39.583857   77859 fix.go:112] recreateIfNeeded on default-k8s-diff-port-502055: state=Stopped err=<nil>
	I0729 18:26:39.583883   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	W0729 18:26:39.584062   77859 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:26:39.586281   77859 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-502055" ...
	I0729 18:26:39.587651   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Start
	I0729 18:26:39.587814   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Ensuring networks are active...
	I0729 18:26:39.588499   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Ensuring network default is active
	I0729 18:26:39.588864   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Ensuring network mk-default-k8s-diff-port-502055 is active
	I0729 18:26:39.589616   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Getting domain xml...
	I0729 18:26:39.590433   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Creating domain...
	I0729 18:26:37.896070   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:37.896640   77627 main.go:141] libmachine: (embed-certs-409322) Found IP for machine: 192.168.39.58
	I0729 18:26:37.896664   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has current primary IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:37.896670   77627 main.go:141] libmachine: (embed-certs-409322) Reserving static IP address...
	I0729 18:26:37.897129   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "embed-certs-409322", mac: "52:54:00:22:9f:57", ip: "192.168.39.58"} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:37.897157   77627 main.go:141] libmachine: (embed-certs-409322) Reserved static IP address: 192.168.39.58
	I0729 18:26:37.897173   77627 main.go:141] libmachine: (embed-certs-409322) DBG | skip adding static IP to network mk-embed-certs-409322 - found existing host DHCP lease matching {name: "embed-certs-409322", mac: "52:54:00:22:9f:57", ip: "192.168.39.58"}
	I0729 18:26:37.897189   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Getting to WaitForSSH function...
	I0729 18:26:37.897206   77627 main.go:141] libmachine: (embed-certs-409322) Waiting for SSH to be available...
	I0729 18:26:37.899216   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:37.899595   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:37.899616   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:37.899785   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Using SSH client type: external
	I0729 18:26:37.899808   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa (-rw-------)
	I0729 18:26:37.899845   77627 main.go:141] libmachine: (embed-certs-409322) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:26:37.899858   77627 main.go:141] libmachine: (embed-certs-409322) DBG | About to run SSH command:
	I0729 18:26:37.899872   77627 main.go:141] libmachine: (embed-certs-409322) DBG | exit 0
	I0729 18:26:38.026619   77627 main.go:141] libmachine: (embed-certs-409322) DBG | SSH cmd err, output: <nil>: 
	I0729 18:26:38.027028   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetConfigRaw
	I0729 18:26:38.027621   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetIP
	I0729 18:26:38.030532   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.030963   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.030989   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.031243   77627 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/config.json ...
	I0729 18:26:38.031413   77627 machine.go:94] provisionDockerMachine start ...
	I0729 18:26:38.031437   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:38.031642   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.033867   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.034218   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.034251   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.034380   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:38.034545   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.034682   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.034807   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:38.034992   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:38.035175   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:38.035185   77627 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:26:38.142565   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:26:38.142595   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetMachineName
	I0729 18:26:38.142842   77627 buildroot.go:166] provisioning hostname "embed-certs-409322"
	I0729 18:26:38.142872   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetMachineName
	I0729 18:26:38.143071   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.145625   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.145951   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.145974   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.146217   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:38.146423   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.146577   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.146730   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:38.146861   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:38.147046   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:38.147065   77627 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-409322 && echo "embed-certs-409322" | sudo tee /etc/hostname
	I0729 18:26:38.264341   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-409322
	
	I0729 18:26:38.264368   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.266846   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.267144   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.267171   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.267328   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:38.267488   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.267660   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.267757   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:38.267936   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:38.268106   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:38.268122   77627 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-409322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-409322/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-409322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:26:38.383748   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:26:38.383779   77627 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:26:38.383805   77627 buildroot.go:174] setting up certificates
	I0729 18:26:38.383817   77627 provision.go:84] configureAuth start
	I0729 18:26:38.383827   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetMachineName
	I0729 18:26:38.384110   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetIP
	I0729 18:26:38.386936   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.387320   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.387348   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.387508   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.389550   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.389871   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.389910   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.389978   77627 provision.go:143] copyHostCerts
	I0729 18:26:38.390039   77627 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:26:38.390052   77627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:26:38.390137   77627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:26:38.390257   77627 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:26:38.390268   77627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:26:38.390308   77627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:26:38.390406   77627 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:26:38.390416   77627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:26:38.390456   77627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:26:38.390526   77627 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.embed-certs-409322 san=[127.0.0.1 192.168.39.58 embed-certs-409322 localhost minikube]
	I0729 18:26:38.903674   77627 provision.go:177] copyRemoteCerts
	I0729 18:26:38.903758   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:26:38.903791   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.906662   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.906984   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.907018   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.907171   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:38.907360   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.907543   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:38.907667   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:26:38.992373   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 18:26:39.016465   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:26:39.039598   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:26:39.062415   77627 provision.go:87] duration metric: took 678.589364ms to configureAuth
	I0729 18:26:39.062443   77627 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:26:39.062622   77627 config.go:182] Loaded profile config "embed-certs-409322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:26:39.062696   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.065308   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.065703   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.065728   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.065902   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.066076   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.066244   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.066403   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.066553   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:39.066743   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:39.066759   77627 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:26:39.326153   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:26:39.326176   77627 machine.go:97] duration metric: took 1.29475208s to provisionDockerMachine
	I0729 18:26:39.326186   77627 start.go:293] postStartSetup for "embed-certs-409322" (driver="kvm2")
	I0729 18:26:39.326195   77627 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:26:39.326209   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.326603   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:26:39.326637   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.329049   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.329448   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.329476   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.329616   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.329822   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.330022   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.330186   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:26:39.413084   77627 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:26:39.417438   77627 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:26:39.417462   77627 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:26:39.417535   77627 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:26:39.417626   77627 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:26:39.417749   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:26:39.427256   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:26:39.451330   77627 start.go:296] duration metric: took 125.132889ms for postStartSetup
	I0729 18:26:39.451362   77627 fix.go:56] duration metric: took 19.499949606s for fixHost
	I0729 18:26:39.451380   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.453750   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.454047   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.454072   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.454237   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.454416   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.454570   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.454698   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.454864   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:39.455069   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:39.455080   77627 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 18:26:39.563211   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277599.531173461
	
	I0729 18:26:39.563238   77627 fix.go:216] guest clock: 1722277599.531173461
	I0729 18:26:39.563248   77627 fix.go:229] Guest: 2024-07-29 18:26:39.531173461 +0000 UTC Remote: 2024-07-29 18:26:39.451365859 +0000 UTC m=+269.697720486 (delta=79.807602ms)
	I0729 18:26:39.563278   77627 fix.go:200] guest clock delta is within tolerance: 79.807602ms
	I0729 18:26:39.563287   77627 start.go:83] releasing machines lock for "embed-certs-409322", held for 19.611902888s
	I0729 18:26:39.563318   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.563562   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetIP
	I0729 18:26:39.566225   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.566549   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.566575   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.566766   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.567227   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.567378   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.567460   77627 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:26:39.567501   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.567565   77627 ssh_runner.go:195] Run: cat /version.json
	I0729 18:26:39.567593   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.570113   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.570330   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.570536   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.570558   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.570747   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.570754   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.570776   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.570883   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.571004   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.571113   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.571211   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.571330   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.571438   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:26:39.571478   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:26:39.651235   77627 ssh_runner.go:195] Run: systemctl --version
	I0729 18:26:39.677383   77627 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:26:39.824036   77627 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:26:39.830027   77627 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:26:39.830103   77627 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:26:39.845939   77627 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:26:39.845963   77627 start.go:495] detecting cgroup driver to use...
	I0729 18:26:39.846019   77627 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:26:39.862867   77627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:26:39.878060   77627 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:26:39.878152   77627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:26:39.892471   77627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:26:39.906690   77627 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:26:40.039725   77627 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:26:40.201419   77627 docker.go:233] disabling docker service ...
	I0729 18:26:40.201489   77627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:26:40.222454   77627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:26:40.237523   77627 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:26:40.371463   77627 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:26:40.499676   77627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:26:40.514068   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:26:40.534051   77627 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:26:40.534114   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.545364   77627 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:26:40.545458   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.557113   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.568215   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.579433   77627 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:26:40.591005   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.601933   77627 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.621097   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.631960   77627 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:26:40.642308   77627 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:26:40.642383   77627 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:26:40.656469   77627 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:26:40.671251   77627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:26:40.784289   77627 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:26:40.933837   77627 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:26:40.933910   77627 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:26:40.939031   77627 start.go:563] Will wait 60s for crictl version
	I0729 18:26:40.939086   77627 ssh_runner.go:195] Run: which crictl
	I0729 18:26:40.943166   77627 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:26:40.985673   77627 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:26:40.985753   77627 ssh_runner.go:195] Run: crio --version
	I0729 18:26:41.013973   77627 ssh_runner.go:195] Run: crio --version
	I0729 18:26:41.046080   77627 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:26:40.822462   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting to get IP...
	I0729 18:26:40.823526   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:40.823948   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:40.824000   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:40.823920   78947 retry.go:31] will retry after 262.026124ms: waiting for machine to come up
	I0729 18:26:41.087492   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.087961   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.087991   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:41.087913   78947 retry.go:31] will retry after 380.066984ms: waiting for machine to come up
	I0729 18:26:41.469728   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.470215   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.470244   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:41.470181   78947 retry.go:31] will retry after 293.069239ms: waiting for machine to come up
	I0729 18:26:41.764797   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.765277   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.765303   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:41.765228   78947 retry.go:31] will retry after 491.247116ms: waiting for machine to come up
	I0729 18:26:42.257741   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:42.258247   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:42.258275   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:42.258220   78947 retry.go:31] will retry after 693.832082ms: waiting for machine to come up
	I0729 18:26:42.953375   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:42.954146   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:42.954169   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:42.954051   78947 retry.go:31] will retry after 710.005115ms: waiting for machine to come up
	I0729 18:26:43.666068   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:43.666478   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:43.666504   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:43.666438   78947 retry.go:31] will retry after 1.077324053s: waiting for machine to come up
	I0729 18:26:41.047322   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetIP
	I0729 18:26:41.049993   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:41.050394   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:41.050433   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:41.050630   77627 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 18:26:41.054805   77627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:26:41.066926   77627 kubeadm.go:883] updating cluster {Name:embed-certs-409322 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-409322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:26:41.067053   77627 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:26:41.067115   77627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:26:41.103417   77627 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 18:26:41.103489   77627 ssh_runner.go:195] Run: which lz4
	I0729 18:26:41.107793   77627 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 18:26:41.112161   77627 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:26:41.112192   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 18:26:42.559564   77627 crio.go:462] duration metric: took 1.451801292s to copy over tarball
	I0729 18:26:42.559679   77627 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:26:44.759513   77627 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.199801336s)
	I0729 18:26:44.759543   77627 crio.go:469] duration metric: took 2.199942615s to extract the tarball
	I0729 18:26:44.759554   77627 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 18:26:44.744984   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:44.745450   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:44.745477   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:44.745403   78947 retry.go:31] will retry after 1.064257005s: waiting for machine to come up
	I0729 18:26:45.811414   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:45.811840   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:45.811880   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:45.811799   78947 retry.go:31] will retry after 1.30236943s: waiting for machine to come up
	I0729 18:26:47.116252   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:47.116668   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:47.116728   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:47.116647   78947 retry.go:31] will retry after 1.424333691s: waiting for machine to come up
	I0729 18:26:48.543481   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:48.543945   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:48.543973   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:48.543894   78947 retry.go:31] will retry after 2.106061522s: waiting for machine to come up
	I0729 18:26:44.798609   77627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:26:44.848236   77627 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:26:44.848257   77627 cache_images.go:84] Images are preloaded, skipping loading
	I0729 18:26:44.848265   77627 kubeadm.go:934] updating node { 192.168.39.58 8443 v1.30.3 crio true true} ...
	I0729 18:26:44.848355   77627 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-409322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-409322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:26:44.848415   77627 ssh_runner.go:195] Run: crio config
	I0729 18:26:44.901558   77627 cni.go:84] Creating CNI manager for ""
	I0729 18:26:44.901584   77627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:26:44.901597   77627 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:26:44.901625   77627 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-409322 NodeName:embed-certs-409322 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:26:44.901807   77627 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-409322"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:26:44.901875   77627 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:26:44.912290   77627 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:26:44.912351   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:26:44.921801   77627 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0729 18:26:44.940473   77627 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:26:44.958445   77627 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0729 18:26:44.976890   77627 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I0729 18:26:44.980974   77627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:26:44.994793   77627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:26:45.120453   77627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:26:45.138398   77627 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322 for IP: 192.168.39.58
	I0729 18:26:45.138419   77627 certs.go:194] generating shared ca certs ...
	I0729 18:26:45.138438   77627 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:26:45.138592   77627 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:26:45.138643   77627 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:26:45.138657   77627 certs.go:256] generating profile certs ...
	I0729 18:26:45.138751   77627 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/client.key
	I0729 18:26:45.138823   77627 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/apiserver.key.4af4a6b9
	I0729 18:26:45.138889   77627 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/proxy-client.key
	I0729 18:26:45.139034   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:26:45.139074   77627 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:26:45.139088   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:26:45.139122   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:26:45.139161   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:26:45.139200   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:26:45.139305   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:26:45.139979   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:26:45.177194   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:26:45.206349   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:26:45.242291   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:26:45.277062   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 18:26:45.312447   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 18:26:45.345482   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:26:45.369151   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:26:45.394521   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:26:45.418579   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:26:45.443252   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:26:45.466770   77627 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:26:45.484159   77627 ssh_runner.go:195] Run: openssl version
	I0729 18:26:45.490045   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:26:45.501166   77627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:26:45.505930   77627 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:26:45.505988   77627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:26:45.511926   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:26:45.522860   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:26:45.533560   77627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:26:45.538411   77627 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:26:45.538474   77627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:26:45.544485   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:26:45.555603   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:26:45.566407   77627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:26:45.570892   77627 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:26:45.570944   77627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:26:45.576555   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
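The block above installs each CA into /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 3ec20f2e.0, 51391683.0), which is how OpenSSL-style trust stores look certificates up. A minimal Go sketch of that hash-and-symlink step, shelling out to the same openssl binary; the paths and the fixed ".0" suffix (no collision handling) are simplifications, not minikube's actual code:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links certPath into /etc/ssl/certs under its OpenSSL subject
// hash, mirroring the "openssl x509 -hash" + "ln -fs" pair in the log.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any stale link so repeated provisioning stays idempotent.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

In a real trust store the numeric suffix is incremented when two certificates share a subject hash; the sketch ignores that case.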
	I0729 18:26:45.587780   77627 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:26:45.592689   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:26:45.598981   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:26:45.604952   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:26:45.611225   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:26:45.617506   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:26:45.623744   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
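Each `openssl x509 -checkend 86400` call above exits non-zero if the certificate will expire within 86400 seconds (24 hours). A rough standard-library equivalent, with an example path taken from the log:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within
// the given window, i.e. what "openssl x509 -checkend" checks.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```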
	I0729 18:26:45.629836   77627 kubeadm.go:392] StartCluster: {Name:embed-certs-409322 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-409322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:26:45.629947   77627 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:26:45.630003   77627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:26:45.667768   77627 cri.go:89] found id: ""
	I0729 18:26:45.667853   77627 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:26:45.678703   77627 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:26:45.678724   77627 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:26:45.678772   77627 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:26:45.691979   77627 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:26:45.693237   77627 kubeconfig.go:125] found "embed-certs-409322" server: "https://192.168.39.58:8443"
	I0729 18:26:45.696093   77627 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:26:45.708981   77627 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.58
	I0729 18:26:45.709017   77627 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:26:45.709030   77627 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:26:45.709088   77627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:26:45.748738   77627 cri.go:89] found id: ""
	I0729 18:26:45.748817   77627 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:26:45.775148   77627 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:26:45.786631   77627 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:26:45.786651   77627 kubeadm.go:157] found existing configuration files:
	
	I0729 18:26:45.786701   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:26:45.799453   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:26:45.799507   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:26:45.809691   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:26:45.819592   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:26:45.819638   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:26:45.832072   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:26:45.843769   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:26:45.843817   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:26:45.854649   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:26:45.863448   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:26:45.863504   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:26:45.872399   77627 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:26:45.881992   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:46.012679   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:47.143076   77627 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.130359187s)
	I0729 18:26:47.143112   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:47.370854   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:47.446808   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
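Since existing configuration files were found, the restart path re-runs individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml rather than doing a full init. A sketch of driving that same phase sequence locally; the log actually runs these over SSH with sudo and a pinned PATH, which is omitted here:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// The restart path re-runs only these kubeadm init phases, in the order
// the log shows, against the generated kubeadm.yaml.
var restartPhases = [][]string{
	{"certs", "all"},
	{"kubeconfig", "all"},
	{"kubelet-start"},
	{"control-plane", "all"},
	{"etcd", "local"},
}

func main() {
	const cfg = "/var/tmp/minikube/kubeadm.yaml"
	for _, phase := range restartPhases {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", cfg)
		cmd := exec.Command("kubeadm", args...) // the log wraps this in sudo over SSH
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm init phase %v failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}
```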
	I0729 18:26:47.550087   77627 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:26:47.550191   77627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:26:48.050502   77627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:26:48.550499   77627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:26:48.608713   77627 api_server.go:72] duration metric: took 1.058625786s to wait for apiserver process to appear ...
	I0729 18:26:48.608745   77627 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:26:48.608773   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:51.829925   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:26:51.829963   77627 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:26:51.829979   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:51.843474   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:26:51.843503   77627 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:26:52.109882   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:52.117387   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:26:52.117415   77627 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:26:52.608863   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:52.613809   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:26:52.613840   77627 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:26:53.109430   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:53.115353   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I0729 18:26:53.122373   77627 api_server.go:141] control plane version: v1.30.3
	I0729 18:26:53.122411   77627 api_server.go:131] duration metric: took 4.513658045s to wait for apiserver health ...
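The 403 → 500 → 200 progression above is the apiserver coming up: the first anonymous probes are rejected outright, then /healthz answers but reports failing post-start hooks (rbac/bootstrap-roles, bootstrap-system-priority-classes) until every check passes. A minimal poller in the same spirit; it skips TLS verification for brevity, which a real client should not do:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// 200 or the deadline passes, printing each intermediate status.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: skip verification; prefer trusting the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.58:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```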
	I0729 18:26:53.122420   77627 cni.go:84] Creating CNI manager for ""
	I0729 18:26:53.122426   77627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:26:53.123807   77627 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:26:50.651329   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:50.651724   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:50.651753   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:50.651678   78947 retry.go:31] will retry after 3.358167933s: waiting for machine to come up
	I0729 18:26:54.014102   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:54.014543   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:54.014576   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:54.014495   78947 retry.go:31] will retry after 4.372189125s: waiting for machine to come up
	I0729 18:26:53.124953   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:26:53.140970   77627 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
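The 496-byte /etc/cni/net.d/1-k8s.conflist written here configures the bridge CNI chosen by the earlier "recommending bridge" line. The file itself is not reproduced in the log; a typical bridge-plus-portmap conflist of roughly that shape looks like the following (name and subnet are placeholders, not necessarily the exact values written here):

```json
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```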
	I0729 18:26:53.179660   77627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:26:53.193885   77627 system_pods.go:59] 8 kube-system pods found
	I0729 18:26:53.193921   77627 system_pods.go:61] "coredns-7db6d8ff4d-vxvfc" [da2fd5a1-f57f-4374-99ee-9017e228176f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 18:26:53.193932   77627 system_pods.go:61] "etcd-embed-certs-409322" [3eca462f-6156-4858-a886-30d0d32faa35] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 18:26:53.193944   77627 system_pods.go:61] "kube-apiserver-embed-certs-409322" [4c6473c7-d7b8-4513-b800-7cab08748d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 18:26:53.193953   77627 system_pods.go:61] "kube-controller-manager-embed-certs-409322" [2dc47da0-3d24-49d8-91ae-13074468b423] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 18:26:53.193961   77627 system_pods.go:61] "kube-proxy-zf5jf" [a0b6fd82-d0b1-4821-a668-4cb6420b4860] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 18:26:53.193969   77627 system_pods.go:61] "kube-scheduler-embed-certs-409322" [ab422567-58e6-4f22-a7cf-391b35cc386c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 18:26:53.193977   77627 system_pods.go:61] "metrics-server-569cc877fc-flh27" [83d6c69c-200d-4ce2-80e9-b83ff5b6ebe9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:26:53.193989   77627 system_pods.go:61] "storage-provisioner" [73ff548f-26c3-4442-a9bd-bdac45261476] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 18:26:53.194002   77627 system_pods.go:74] duration metric: took 14.320361ms to wait for pod list to return data ...
	I0729 18:26:53.194014   77627 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:26:53.197826   77627 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:26:53.197858   77627 node_conditions.go:123] node cpu capacity is 2
	I0729 18:26:53.197870   77627 node_conditions.go:105] duration metric: took 3.850077ms to run NodePressure ...
	I0729 18:26:53.197884   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:53.467868   77627 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 18:26:53.471886   77627 kubeadm.go:739] kubelet initialised
	I0729 18:26:53.471905   77627 kubeadm.go:740] duration metric: took 4.016417ms waiting for restarted kubelet to initialise ...
	I0729 18:26:53.471912   77627 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:26:53.476695   77627 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-vxvfc" in "kube-system" namespace to be "Ready" ...
	I0729 18:26:53.480449   77627 pod_ready.go:97] node "embed-certs-409322" hosting pod "coredns-7db6d8ff4d-vxvfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.480481   77627 pod_ready.go:81] duration metric: took 3.766ms for pod "coredns-7db6d8ff4d-vxvfc" in "kube-system" namespace to be "Ready" ...
	E0729 18:26:53.480491   77627 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-409322" hosting pod "coredns-7db6d8ff4d-vxvfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.480501   77627 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:26:53.484712   77627 pod_ready.go:97] node "embed-certs-409322" hosting pod "etcd-embed-certs-409322" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.484739   77627 pod_ready.go:81] duration metric: took 4.228077ms for pod "etcd-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	E0729 18:26:53.484750   77627 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-409322" hosting pod "etcd-embed-certs-409322" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.484759   77627 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:26:53.488510   77627 pod_ready.go:97] node "embed-certs-409322" hosting pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.488532   77627 pod_ready.go:81] duration metric: took 3.76371ms for pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	E0729 18:26:53.488539   77627 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-409322" hosting pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.488545   77627 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
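Each pod_ready.go wait above polls a single kube-system pod for a Ready=True condition, and bails out early (the "node ... not Ready" messages) while the node itself is still NotReady. A condensed sketch of the per-pod check with client-go; the kubeconfig path and the 2-second poll interval are placeholders:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady returns true when the pod reports a Ready=True condition.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-409322", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
```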
	I0729 18:26:58.387940   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.388358   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Found IP for machine: 192.168.61.244
	I0729 18:26:58.388383   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has current primary IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.388396   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Reserving static IP address...
	I0729 18:26:58.388794   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-502055", mac: "52:54:00:ae:63:e1", ip: "192.168.61.244"} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.388826   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Reserved static IP address: 192.168.61.244
	I0729 18:26:58.388848   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | skip adding static IP to network mk-default-k8s-diff-port-502055 - found existing host DHCP lease matching {name: "default-k8s-diff-port-502055", mac: "52:54:00:ae:63:e1", ip: "192.168.61.244"}
	I0729 18:26:58.388873   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for SSH to be available...
	I0729 18:26:58.388894   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Getting to WaitForSSH function...
	I0729 18:26:58.390937   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.391281   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.391319   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.391381   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Using SSH client type: external
	I0729 18:26:58.391408   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa (-rw-------)
	I0729 18:26:58.391457   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.244 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:26:58.391490   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | About to run SSH command:
	I0729 18:26:58.391511   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | exit 0
	I0729 18:26:58.518399   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | SSH cmd err, output: <nil>: 
	I0729 18:26:58.518782   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetConfigRaw
	I0729 18:26:58.519492   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetIP
	I0729 18:26:58.522245   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.522580   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.522615   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.522862   77859 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/config.json ...
	I0729 18:26:58.523037   77859 machine.go:94] provisionDockerMachine start ...
	I0729 18:26:58.523053   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:58.523258   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:58.525654   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.525998   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.526018   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.526185   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:58.526351   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.526555   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.526705   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:58.526874   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:58.527066   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:58.527079   77859 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:26:58.635267   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:26:58.635302   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetMachineName
	I0729 18:26:58.635524   77859 buildroot.go:166] provisioning hostname "default-k8s-diff-port-502055"
	I0729 18:26:58.635550   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetMachineName
	I0729 18:26:58.635789   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:58.638770   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.639235   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.639265   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.639371   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:58.639564   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.639729   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.639865   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:58.640048   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:58.640227   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:58.640245   77859 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-502055 && echo "default-k8s-diff-port-502055" | sudo tee /etc/hostname
	I0729 18:26:58.760577   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-502055
	
	I0729 18:26:58.760603   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:58.763294   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.763591   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.763625   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.763766   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:58.763970   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.764159   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.764311   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:58.764480   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:58.764641   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:58.764659   77859 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-502055' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-502055/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-502055' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:26:58.879366   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:26:58.879400   77859 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:26:58.879440   77859 buildroot.go:174] setting up certificates
	I0729 18:26:58.879451   77859 provision.go:84] configureAuth start
	I0729 18:26:58.879463   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetMachineName
	I0729 18:26:58.879735   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetIP
	I0729 18:26:58.882335   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.882652   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.882680   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.882848   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:58.885023   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.885313   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.885339   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.885433   77859 provision.go:143] copyHostCerts
	I0729 18:26:58.885479   77859 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:26:58.885488   77859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:26:58.885544   77859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:26:58.885633   77859 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:26:58.885641   77859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:26:58.885660   77859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:26:58.885709   77859 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:26:58.885716   77859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:26:58.885733   77859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:26:58.885783   77859 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-502055 san=[127.0.0.1 192.168.61.244 default-k8s-diff-port-502055 localhost minikube]
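provision.go:117 above generates a server certificate signed by the local CA with the listed SANs (127.0.0.1, the VM IP, the machine name, localhost, minikube). A compressed crypto/x509 sketch of that signing step; the file names, validity period, and dropped error handling are simplifications rather than the actual routine:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the signing CA (example file names; errors ignored for brevity).
	caPair, _ := tls.LoadX509KeyPair("ca.pem", "ca-key.pem")
	caCert, _ := x509.ParseCertificate(caPair.Certificate[0])

	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-502055"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: loopback, the VM IP, and the host names.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.244")},
		DNSNames:    []string{"default-k8s-diff-port-502055", "localhost", "minikube"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caPair.PrivateKey)

	certOut, _ := os.Create("server.pem")
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	certOut.Close()
	keyOut, _ := os.Create("server-key.pem")
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	keyOut.Close()
}
```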
	I0729 18:26:59.130657   77859 provision.go:177] copyRemoteCerts
	I0729 18:26:59.130724   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:26:59.130749   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.133536   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.133898   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.133922   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.134079   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.134260   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.134421   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.134530   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:26:59.216614   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 18:26:59.240540   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:26:59.267350   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:26:59.294003   77859 provision.go:87] duration metric: took 414.539559ms to configureAuth
	I0729 18:26:59.294032   77859 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:26:59.294222   77859 config.go:182] Loaded profile config "default-k8s-diff-port-502055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:26:59.294293   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.296911   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.297285   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.297311   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.297450   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.297656   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.297804   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.297935   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.298102   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:59.298265   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:59.298281   77859 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:26:59.557084   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:26:59.557131   77859 machine.go:97] duration metric: took 1.034080964s to provisionDockerMachine
	I0729 18:26:59.557148   77859 start.go:293] postStartSetup for "default-k8s-diff-port-502055" (driver="kvm2")
	I0729 18:26:59.557165   77859 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:26:59.557191   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.557496   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:26:59.557529   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.559962   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.560255   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.560276   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.560461   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.560635   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.560798   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.560953   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:26:59.645623   77859 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:26:59.650416   77859 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:26:59.650447   77859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:26:59.650531   77859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:26:59.650624   77859 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:26:59.650730   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:26:59.660864   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:26:59.685728   77859 start.go:296] duration metric: took 128.564534ms for postStartSetup
	I0729 18:26:59.685767   77859 fix.go:56] duration metric: took 20.122314731s for fixHost
	I0729 18:26:59.685791   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.688401   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.688773   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.688801   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.688978   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.689157   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.689293   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.689401   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.689551   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:59.689712   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:59.689722   77859 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:26:55.494570   77627 pod_ready.go:102] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"False"
	I0729 18:26:57.495784   77627 pod_ready.go:102] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"False"
	I0729 18:26:59.799712   78080 start.go:364] duration metric: took 4m12.475660562s to acquireMachinesLock for "old-k8s-version-386663"
	I0729 18:26:59.799786   78080 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:26:59.799796   78080 fix.go:54] fixHost starting: 
	I0729 18:26:59.800184   78080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:26:59.800215   78080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:26:59.816885   78080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37963
	I0729 18:26:59.817336   78080 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:26:59.817822   78080 main.go:141] libmachine: Using API Version  1
	I0729 18:26:59.817851   78080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:26:59.818283   78080 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:26:59.818505   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:26:59.818671   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetState
	I0729 18:26:59.820232   78080 fix.go:112] recreateIfNeeded on old-k8s-version-386663: state=Stopped err=<nil>
	I0729 18:26:59.820254   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	W0729 18:26:59.820426   78080 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:26:59.822140   78080 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-386663" ...
	I0729 18:26:59.799573   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277619.755982716
	
	I0729 18:26:59.799603   77859 fix.go:216] guest clock: 1722277619.755982716
	I0729 18:26:59.799614   77859 fix.go:229] Guest: 2024-07-29 18:26:59.755982716 +0000 UTC Remote: 2024-07-29 18:26:59.685771603 +0000 UTC m=+259.980298680 (delta=70.211113ms)
	I0729 18:26:59.799637   77859 fix.go:200] guest clock delta is within tolerance: 70.211113ms
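fix.go compares the guest clock (read via `date +%s.%N` over SSH, which the log renders with mangled format verbs) against the host clock and treats the ~70ms delta above as within tolerance, so no resync is needed. A trivial illustration of that comparison, using the two timestamps from the log and an assumed tolerance value:

```go
package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the
// host clock that no resync is needed.
func withinTolerance(host, guest time.Time, tolerance time.Duration) bool {
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	host := time.Date(2024, 7, 29, 18, 26, 59, 685771603, time.UTC)
	guest := time.Date(2024, 7, 29, 18, 26, 59, 755982716, time.UTC)
	// Tolerance of 2s is an assumption for illustration, not the real constant.
	fmt.Println("within tolerance:", withinTolerance(host, guest, 2*time.Second)) // true, delta ≈ 70ms
}
```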
	I0729 18:26:59.799641   77859 start.go:83] releasing machines lock for "default-k8s-diff-port-502055", held for 20.236230068s
	I0729 18:26:59.799672   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.799944   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetIP
	I0729 18:26:59.802636   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.802983   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.803013   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.803248   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.803740   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.803927   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.804023   77859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:26:59.804070   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.804193   77859 ssh_runner.go:195] Run: cat /version.json
	I0729 18:26:59.804229   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.807037   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.807117   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.807395   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.807435   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.807528   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.807547   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.807565   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.807708   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.807717   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.807910   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.807936   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.808043   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:26:59.808098   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.808244   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:26:59.920371   77859 ssh_runner.go:195] Run: systemctl --version
	I0729 18:26:59.926620   77859 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:27:00.072161   77859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:27:00.079273   77859 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:27:00.079340   77859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:27:00.096528   77859 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:27:00.096550   77859 start.go:495] detecting cgroup driver to use...
	I0729 18:27:00.096610   77859 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:27:00.113690   77859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:27:00.129058   77859 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:27:00.129126   77859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:27:00.143930   77859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:27:00.158085   77859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:27:00.296398   77859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:27:00.482313   77859 docker.go:233] disabling docker service ...
	I0729 18:27:00.482459   77859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:27:00.501504   77859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:27:00.520932   77859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:27:00.657805   77859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:27:00.792064   77859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:27:00.807790   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:27:00.827373   77859 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:27:00.827423   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.838281   77859 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:27:00.838340   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.849533   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.860820   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.872359   77859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:27:00.883904   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.895589   77859 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.914639   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.926278   77859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:27:00.936329   77859 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:27:00.936383   77859 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:27:00.951219   77859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:27:00.966530   77859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:01.086665   77859 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:27:01.233627   77859 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:27:01.233703   77859 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:27:01.241055   77859 start.go:563] Will wait 60s for crictl version
	I0729 18:27:01.241122   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:27:01.244875   77859 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:27:01.284013   77859 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:27:01.284103   77859 ssh_runner.go:195] Run: crio --version
	I0729 18:27:01.315493   77859 ssh_runner.go:195] Run: crio --version
	I0729 18:27:01.348781   77859 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:26:59.823421   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .Start
	I0729 18:26:59.823575   78080 main.go:141] libmachine: (old-k8s-version-386663) Ensuring networks are active...
	I0729 18:26:59.824264   78080 main.go:141] libmachine: (old-k8s-version-386663) Ensuring network default is active
	I0729 18:26:59.824641   78080 main.go:141] libmachine: (old-k8s-version-386663) Ensuring network mk-old-k8s-version-386663 is active
	I0729 18:26:59.825024   78080 main.go:141] libmachine: (old-k8s-version-386663) Getting domain xml...
	I0729 18:26:59.825885   78080 main.go:141] libmachine: (old-k8s-version-386663) Creating domain...
	I0729 18:27:01.104265   78080 main.go:141] libmachine: (old-k8s-version-386663) Waiting to get IP...
	I0729 18:27:01.105349   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.105790   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.105836   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.105761   79098 retry.go:31] will retry after 308.255094ms: waiting for machine to come up
	I0729 18:27:01.415431   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.415999   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.416030   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.415952   79098 retry.go:31] will retry after 236.525723ms: waiting for machine to come up
	I0729 18:27:01.654767   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.655279   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.655312   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.655247   79098 retry.go:31] will retry after 311.010394ms: waiting for machine to come up
	I0729 18:27:01.967850   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.968374   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.968404   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.968333   79098 retry.go:31] will retry after 468.477549ms: waiting for machine to come up
	I0729 18:27:01.350059   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetIP
	I0729 18:27:01.352945   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:01.353398   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:27:01.353429   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:01.353630   77859 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 18:27:01.357955   77859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:01.371879   77859 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-502055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-502055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.244 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:27:01.372034   77859 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:27:01.372100   77859 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:01.412356   77859 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 18:27:01.412423   77859 ssh_runner.go:195] Run: which lz4
	I0729 18:27:01.417768   77859 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 18:27:01.422809   77859 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:27:01.422836   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 18:27:02.909800   77859 crio.go:462] duration metric: took 1.492088664s to copy over tarball
	I0729 18:27:02.909868   77859 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:26:59.995351   77627 pod_ready.go:102] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:01.999130   77627 pod_ready.go:102] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:04.012357   77627 pod_ready.go:92] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:04.012385   77627 pod_ready.go:81] duration metric: took 10.523832262s for pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.012398   77627 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zf5jf" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.025409   77627 pod_ready.go:92] pod "kube-proxy-zf5jf" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:04.025448   77627 pod_ready.go:81] duration metric: took 13.042254ms for pod "kube-proxy-zf5jf" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.025461   77627 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.036057   77627 pod_ready.go:92] pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:04.036078   77627 pod_ready.go:81] duration metric: took 10.608531ms for pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.036090   77627 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:02.438066   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:02.438657   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:02.438686   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:02.438618   79098 retry.go:31] will retry after 601.056921ms: waiting for machine to come up
	I0729 18:27:03.041582   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:03.042097   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:03.042127   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:03.042040   79098 retry.go:31] will retry after 712.049848ms: waiting for machine to come up
	I0729 18:27:03.755536   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:03.756010   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:03.756040   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:03.755988   79098 retry.go:31] will retry after 1.092318096s: waiting for machine to come up
	I0729 18:27:04.849745   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:04.850202   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:04.850226   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:04.850147   79098 retry.go:31] will retry after 903.54457ms: waiting for machine to come up
	I0729 18:27:05.754781   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:05.755193   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:05.755218   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:05.755157   79098 retry.go:31] will retry after 1.693512671s: waiting for machine to come up
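The retry.go entries above show libmachine repeatedly asking libvirt for the domain's DHCP lease, waiting a jittered, roughly growing delay between attempts until the restarted VM reports an IP. A minimal sketch of that wait-for-IP pattern in Go, using a hypothetical lookupIP helper in place of minikube's actual retry package:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// lookupIP is a hypothetical stand-in for querying the domain's DHCP lease.
	func lookupIP(attempt int) (string, error) {
		if attempt < 5 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.61.1", nil
	}

	// waitForIP retries lookupIP with a growing delay until it succeeds or the
	// deadline passes, mirroring the "will retry after ..." lines in the log.
	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 300 * time.Millisecond
		for attempt := 1; ; attempt++ {
			ip, err := lookupIP(attempt)
			if err == nil {
				return ip, nil
			}
			if time.Now().Add(delay).After(deadline) {
				return "", fmt.Errorf("timed out waiting for machine to come up: %w", err)
			}
			fmt.Printf("retry %d: will retry after %s: %v\n", attempt, delay, err)
			time.Sleep(delay)
			delay = delay * 3 / 2 // grow the wait, roughly like the log's backoff
		}
	}

	func main() {
		ip, err := waitForIP(2 * time.Minute)
		fmt.Println(ip, err)
	}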
	I0729 18:27:05.188101   77859 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.27820184s)
	I0729 18:27:05.188132   77859 crio.go:469] duration metric: took 2.278304723s to extract the tarball
	I0729 18:27:05.188140   77859 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 18:27:05.227453   77859 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:05.274530   77859 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:27:05.274560   77859 cache_images.go:84] Images are preloaded, skipping loading
	I0729 18:27:05.274571   77859 kubeadm.go:934] updating node { 192.168.61.244 8444 v1.30.3 crio true true} ...
	I0729 18:27:05.274708   77859 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-502055 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-502055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:27:05.274788   77859 ssh_runner.go:195] Run: crio config
	I0729 18:27:05.320697   77859 cni.go:84] Creating CNI manager for ""
	I0729 18:27:05.320725   77859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:27:05.320741   77859 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:27:05.320774   77859 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.244 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-502055 NodeName:default-k8s-diff-port-502055 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.244"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.244 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:27:05.320948   77859 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.244
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-502055"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.244
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.244"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:27:05.321028   77859 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:27:05.331541   77859 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:27:05.331609   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:27:05.341433   77859 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0729 18:27:05.358696   77859 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:27:05.376531   77859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0729 18:27:05.394349   77859 ssh_runner.go:195] Run: grep 192.168.61.244	control-plane.minikube.internal$ /etc/hosts
	I0729 18:27:05.398156   77859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.244	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:05.411839   77859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:05.561467   77859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:27:05.583184   77859 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055 for IP: 192.168.61.244
	I0729 18:27:05.583209   77859 certs.go:194] generating shared ca certs ...
	I0729 18:27:05.583251   77859 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:05.583406   77859 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:27:05.583460   77859 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:27:05.583473   77859 certs.go:256] generating profile certs ...
	I0729 18:27:05.583577   77859 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/client.key
	I0729 18:27:05.583642   77859 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/apiserver.key.2edc4448
	I0729 18:27:05.583692   77859 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/proxy-client.key
	I0729 18:27:05.583835   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:27:05.583872   77859 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:27:05.583886   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:27:05.583917   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:27:05.583957   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:27:05.583991   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:27:05.584048   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:05.584726   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:27:05.624996   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:27:05.670153   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:27:05.715354   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:27:05.743807   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 18:27:05.777366   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:27:05.802152   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:27:05.826974   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:27:05.850417   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:27:05.873185   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:27:05.899387   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:27:05.927963   77859 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:27:05.947817   77859 ssh_runner.go:195] Run: openssl version
	I0729 18:27:05.955635   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:27:05.969765   77859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:27:05.974559   77859 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:27:05.974606   77859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:27:05.980557   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:27:05.991819   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:27:06.004961   77859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:06.009999   77859 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:06.010074   77859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:06.016045   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:27:06.027698   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:27:06.039648   77859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:27:06.045057   77859 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:27:06.045130   77859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:27:06.051127   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 18:27:06.062761   77859 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:27:06.068832   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:27:06.076652   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:27:06.084517   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:27:06.091125   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:27:06.097346   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:27:06.103428   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
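The openssl x509 -noout -checkend 86400 runs above confirm that each control-plane certificate is still valid for at least another 24 hours before the existing machine configuration is reused. A rough Go equivalent of that check, assuming the certificate is a readable PEM file (illustrative only, not minikube's implementation):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path expires
	// within d, the same condition `openssl x509 -checkend <seconds>` evaluates.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", expiring)
	}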
	I0729 18:27:06.109312   77859 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-502055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-502055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.244 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:27:06.109403   77859 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:27:06.109440   77859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:06.153439   77859 cri.go:89] found id: ""
	I0729 18:27:06.153528   77859 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:27:06.166412   77859 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:27:06.166434   77859 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:27:06.166486   77859 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:27:06.183064   77859 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:27:06.184168   77859 kubeconfig.go:125] found "default-k8s-diff-port-502055" server: "https://192.168.61.244:8444"
	I0729 18:27:06.186283   77859 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:27:06.197418   77859 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.244
	I0729 18:27:06.197444   77859 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:27:06.197454   77859 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:27:06.197506   77859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:06.237753   77859 cri.go:89] found id: ""
	I0729 18:27:06.237839   77859 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:27:06.257323   77859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:27:06.269157   77859 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:27:06.269176   77859 kubeadm.go:157] found existing configuration files:
	
	I0729 18:27:06.269229   77859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 18:27:06.279313   77859 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:27:06.279369   77859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:27:06.292141   77859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 18:27:06.303961   77859 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:27:06.304028   77859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:27:06.316051   77859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 18:27:06.328004   77859 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:27:06.328064   77859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:27:06.340357   77859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 18:27:06.352021   77859 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:27:06.352068   77859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:27:06.364479   77859 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:27:06.375313   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:06.498692   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:07.853845   77859 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.355105254s)
	I0729 18:27:07.853882   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:08.069616   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:08.144574   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:08.225236   77859 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:27:08.225336   77859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:08.725789   77859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:09.226271   77859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:09.270268   77859 api_server.go:72] duration metric: took 1.045028259s to wait for apiserver process to appear ...
	I0729 18:27:09.270298   77859 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:27:09.270320   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:09.270877   77859 api_server.go:269] stopped: https://192.168.61.244:8444/healthz: Get "https://192.168.61.244:8444/healthz": dial tcp 192.168.61.244:8444: connect: connection refused
	I0729 18:27:06.043838   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:08.044382   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:07.451087   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:07.451659   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:07.451688   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:07.451607   79098 retry.go:31] will retry after 1.734643072s: waiting for machine to come up
	I0729 18:27:09.188407   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:09.188963   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:09.188997   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:09.188900   79098 retry.go:31] will retry after 2.010973572s: waiting for machine to come up
	I0729 18:27:11.201171   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:11.201586   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:11.201620   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:11.201535   79098 retry.go:31] will retry after 3.178533437s: waiting for machine to come up
	I0729 18:27:09.771273   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:12.506136   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:27:12.506166   77859 api_server.go:103] status: https://192.168.61.244:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:27:12.506179   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:12.518847   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:27:12.518881   77859 api_server.go:103] status: https://192.168.61.244:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:27:12.771281   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:12.775798   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:27:12.775832   77859 api_server.go:103] status: https://192.168.61.244:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:27:13.270383   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:13.281935   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:27:13.281975   77859 api_server.go:103] status: https://192.168.61.244:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:27:13.770440   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:13.776004   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 200:
	ok
	I0729 18:27:13.783210   77859 api_server.go:141] control plane version: v1.30.3
	I0729 18:27:13.783237   77859 api_server.go:131] duration metric: took 4.512933596s to wait for apiserver health ...
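The lines above poll the apiserver's /healthz endpoint, logging each 500 response (with its failing poststarthook checks) until the endpoint finally answers 200 "ok". A minimal Go sketch of that polling pattern follows; it is illustrative only and is not minikube's actual api_server.go implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz polls url until it returns HTTP 200 or the timeout expires,
// printing the body of any non-200 response, much like the log output above.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Certificate verification is skipped purely for illustration; real code
		// would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %v", timeout)
}

func main() {
	if err := pollHealthz("https://192.168.61.244:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}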
	I0729 18:27:13.783247   77859 cni.go:84] Creating CNI manager for ""
	I0729 18:27:13.783253   77859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:27:13.785148   77859 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:27:13.786485   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:27:13.814986   77859 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 18:27:13.860557   77859 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:27:13.872823   77859 system_pods.go:59] 8 kube-system pods found
	I0729 18:27:13.872864   77859 system_pods.go:61] "coredns-7db6d8ff4d-mk6mx" [e005b1f9-cc7a-45aa-915e-85a461ebc814] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 18:27:13.872871   77859 system_pods.go:61] "etcd-default-k8s-diff-port-502055" [72b552cc-67b0-46bf-b3dd-b6732ebe8493] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 18:27:13.872879   77859 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-502055" [0dc22dbc-667e-4d6f-9938-b13bf3503f79] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 18:27:13.872885   77859 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-502055" [4df00b98-12cf-4359-9d98-8cce6ee9708a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 18:27:13.872891   77859 system_pods.go:61] "kube-proxy-cgdm8" [57a99bb3-9e63-47dd-a958-5be7f3c0a9c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 18:27:13.872898   77859 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-502055" [247b7cd1-6267-469d-af05-b33b284ae846] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 18:27:13.872903   77859 system_pods.go:61] "metrics-server-569cc877fc-bm8tm" [6891d9ee-82db-4307-adf1-ff60d35506bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:27:13.872912   77859 system_pods.go:61] "storage-provisioner" [c2264d30-60dc-41f9-9b84-3b073031cf1b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 18:27:13.872920   77859 system_pods.go:74] duration metric: took 12.342162ms to wait for pod list to return data ...
	I0729 18:27:13.872929   77859 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:27:13.879353   77859 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:27:13.879384   77859 node_conditions.go:123] node cpu capacity is 2
	I0729 18:27:13.879396   77859 node_conditions.go:105] duration metric: took 6.459994ms to run NodePressure ...
	I0729 18:27:13.879416   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:14.172203   77859 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 18:27:14.178467   77859 kubeadm.go:739] kubelet initialised
	I0729 18:27:14.178490   77859 kubeadm.go:740] duration metric: took 6.259862ms waiting for restarted kubelet to initialise ...
	I0729 18:27:14.178499   77859 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:27:14.184872   77859 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.190847   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.190871   77859 pod_ready.go:81] duration metric: took 5.974917ms for pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.190879   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.190886   77859 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.195570   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.195593   77859 pod_ready.go:81] duration metric: took 4.699847ms for pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.195603   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.195610   77859 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.199460   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.199480   77859 pod_ready.go:81] duration metric: took 3.863218ms for pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.199489   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.199494   77859 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.264725   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.264759   77859 pod_ready.go:81] duration metric: took 65.256372ms for pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.264774   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.264781   77859 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cgdm8" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.664064   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "kube-proxy-cgdm8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.664089   77859 pod_ready.go:81] duration metric: took 399.300184ms for pod "kube-proxy-cgdm8" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.664100   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "kube-proxy-cgdm8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.664109   77859 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:10.044797   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:12.543553   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:15.064029   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.064059   77859 pod_ready.go:81] duration metric: took 399.939139ms for pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:15.064074   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.064082   77859 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:15.464538   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.464564   77859 pod_ready.go:81] duration metric: took 400.472397ms for pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:15.464584   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.464592   77859 pod_ready.go:38] duration metric: took 1.286083847s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
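The pod_ready.go entries above wait for each system-critical pod's Ready condition, skipping pods whose node is itself not yet "Ready". A minimal client-go sketch of that per-pod wait is shown below; it is an assumption for illustration, and minikube's pod_ready.go differs in detail.

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the named pod until its Ready condition is True or the timeout expires.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-system", "coredns-7db6d8ff4d-mk6mx", 4*time.Minute))
}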
	I0729 18:27:15.464609   77859 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 18:27:15.478197   77859 ops.go:34] apiserver oom_adj: -16
	I0729 18:27:15.478220   77859 kubeadm.go:597] duration metric: took 9.311779975s to restartPrimaryControlPlane
	I0729 18:27:15.478229   77859 kubeadm.go:394] duration metric: took 9.368934157s to StartCluster
	I0729 18:27:15.478247   77859 settings.go:142] acquiring lock: {Name:mkd2c4591636cc1d19b23a0dab1807db2e7ea395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:15.478311   77859 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:27:15.479920   77859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:15.480159   77859 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.244 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:27:15.480244   77859 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 18:27:15.480322   77859 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-502055"
	I0729 18:27:15.480355   77859 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-502055"
	I0729 18:27:15.480356   77859 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-502055"
	W0729 18:27:15.480368   77859 addons.go:243] addon storage-provisioner should already be in state true
	I0729 18:27:15.480371   77859 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-502055"
	I0729 18:27:15.480396   77859 host.go:66] Checking if "default-k8s-diff-port-502055" exists ...
	I0729 18:27:15.480397   77859 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-502055"
	I0729 18:27:15.480402   77859 config.go:182] Loaded profile config "default-k8s-diff-port-502055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:27:15.480415   77859 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-502055"
	W0729 18:27:15.480426   77859 addons.go:243] addon metrics-server should already be in state true
	I0729 18:27:15.480460   77859 host.go:66] Checking if "default-k8s-diff-port-502055" exists ...
	I0729 18:27:15.480709   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.480723   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.480738   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.480738   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.480914   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.480943   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.482004   77859 out.go:177] * Verifying Kubernetes components...
	I0729 18:27:15.483504   77859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:15.495748   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35469
	I0729 18:27:15.495965   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43251
	I0729 18:27:15.495977   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41147
	I0729 18:27:15.496147   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.496324   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.496433   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.496604   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.496622   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.496760   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.496778   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.496914   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.496930   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.496982   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.497086   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.497219   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.497644   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.497672   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.498076   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:27:15.498408   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.498449   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.501769   77859 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-502055"
	W0729 18:27:15.501790   77859 addons.go:243] addon default-storageclass should already be in state true
	I0729 18:27:15.501814   77859 host.go:66] Checking if "default-k8s-diff-port-502055" exists ...
	I0729 18:27:15.502132   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.502163   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.516862   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42139
	I0729 18:27:15.517070   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33417
	I0729 18:27:15.517336   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.517525   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.517845   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.517877   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.518255   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.518356   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.518418   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.518657   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:27:15.518793   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.519009   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:27:15.520045   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44865
	I0729 18:27:15.520489   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.520613   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:27:15.520785   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:27:15.520962   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.520979   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.521295   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.521697   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.521712   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.522950   77859 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:15.522950   77859 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 18:27:15.524246   77859 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 18:27:15.524268   77859 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 18:27:15.524291   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:27:15.524355   77859 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:27:15.524370   77859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 18:27:15.524388   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:27:15.527946   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.528008   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.528609   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:27:15.528645   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.528678   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:27:15.528691   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.528723   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:27:15.528939   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:27:15.528953   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:27:15.529101   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:27:15.529150   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:27:15.529218   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:27:15.529524   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:27:15.529716   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:27:15.539969   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41273
	I0729 18:27:15.540410   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.540887   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.540913   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.541351   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.541675   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:27:15.543494   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:27:15.543728   77859 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 18:27:15.543744   77859 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 18:27:15.543762   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:27:15.546809   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.547225   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:27:15.547250   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.547405   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:27:15.547595   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:27:15.547736   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:27:15.547859   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:27:15.662741   77859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:27:15.681179   77859 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-502055" to be "Ready" ...
	I0729 18:27:15.754691   77859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:27:15.767498   77859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 18:27:15.767515   77859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 18:27:15.781857   77859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 18:27:15.801619   77859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 18:27:15.801645   77859 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 18:27:15.823663   77859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:27:15.823690   77859 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 18:27:15.847827   77859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:27:16.818178   77859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.063432468s)
	I0729 18:27:16.818180   77859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.036288517s)
	I0729 18:27:16.818268   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.818234   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.818290   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.818307   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.818677   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Closing plugin on server side
	I0729 18:27:16.818680   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Closing plugin on server side
	I0729 18:27:16.818694   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.818710   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.818723   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.818724   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.818735   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.818740   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.818755   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.818766   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.818989   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.819000   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.819004   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.819017   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.819014   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Closing plugin on server side
	I0729 18:27:16.824028   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.824047   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.824268   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.824292   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.877321   77859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.029455089s)
	I0729 18:27:16.877378   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.877393   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.877718   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.877767   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.877790   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.877801   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.878030   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.878047   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.878061   77859 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-502055"
	I0729 18:27:16.879704   77859 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 18:27:14.381238   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:14.381648   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:14.381677   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:14.381609   79098 retry.go:31] will retry after 4.005160817s: waiting for machine to come up
	I0729 18:27:16.880972   77859 addons.go:510] duration metric: took 1.400728317s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 18:27:17.685480   77859 node_ready.go:53] node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:19.687853   77859 node_ready.go:53] node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.042487   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:17.043250   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:19.045374   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:19.859418   77394 start.go:364] duration metric: took 54.906462088s to acquireMachinesLock for "no-preload-888056"
	I0729 18:27:19.859470   77394 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:27:19.859478   77394 fix.go:54] fixHost starting: 
	I0729 18:27:19.859850   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:19.859896   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:19.876798   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46323
	I0729 18:27:19.877254   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:19.877674   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:27:19.877709   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:19.878087   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:19.878257   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:19.878399   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:27:19.879875   77394 fix.go:112] recreateIfNeeded on no-preload-888056: state=Stopped err=<nil>
	I0729 18:27:19.879909   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	W0729 18:27:19.880054   77394 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:27:19.882098   77394 out.go:177] * Restarting existing kvm2 VM for "no-preload-888056" ...
	I0729 18:27:18.388470   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.388971   78080 main.go:141] libmachine: (old-k8s-version-386663) Found IP for machine: 192.168.50.70
	I0729 18:27:18.388989   78080 main.go:141] libmachine: (old-k8s-version-386663) Reserving static IP address...
	I0729 18:27:18.388999   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has current primary IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.389431   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "old-k8s-version-386663", mac: "52:54:00:78:b6:ac", ip: "192.168.50.70"} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.389459   78080 main.go:141] libmachine: (old-k8s-version-386663) Reserved static IP address: 192.168.50.70
	I0729 18:27:18.389477   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | skip adding static IP to network mk-old-k8s-version-386663 - found existing host DHCP lease matching {name: "old-k8s-version-386663", mac: "52:54:00:78:b6:ac", ip: "192.168.50.70"}
	I0729 18:27:18.389493   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | Getting to WaitForSSH function...
	I0729 18:27:18.389515   78080 main.go:141] libmachine: (old-k8s-version-386663) Waiting for SSH to be available...
	I0729 18:27:18.391523   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.391916   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.391941   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.392062   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | Using SSH client type: external
	I0729 18:27:18.392088   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa (-rw-------)
	I0729 18:27:18.392119   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:27:18.392134   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | About to run SSH command:
	I0729 18:27:18.392150   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | exit 0
	I0729 18:27:18.514735   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | SSH cmd err, output: <nil>: 
	I0729 18:27:18.515114   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetConfigRaw
	I0729 18:27:18.515736   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:18.518194   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.518615   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.518651   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.518879   78080 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/config.json ...
	I0729 18:27:18.519090   78080 machine.go:94] provisionDockerMachine start ...
	I0729 18:27:18.519113   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:18.519322   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.521434   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.521824   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.521846   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.521996   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:18.522181   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.522349   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.522514   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:18.522724   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:18.522960   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:18.522975   78080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:27:18.622960   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:27:18.622989   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetMachineName
	I0729 18:27:18.623249   78080 buildroot.go:166] provisioning hostname "old-k8s-version-386663"
	I0729 18:27:18.623277   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetMachineName
	I0729 18:27:18.623461   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.626009   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.626376   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.626406   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.626649   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:18.626876   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.627141   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.627301   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:18.627474   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:18.627669   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:18.627683   78080 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-386663 && echo "old-k8s-version-386663" | sudo tee /etc/hostname
	I0729 18:27:18.748137   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-386663
	
	I0729 18:27:18.748165   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.751546   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.751882   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.751916   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.752086   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:18.752270   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.752409   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.752550   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:18.752747   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:18.753004   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:18.753031   78080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-386663' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-386663/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-386663' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:27:18.863358   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:27:18.863389   78080 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:27:18.863415   78080 buildroot.go:174] setting up certificates
	I0729 18:27:18.863425   78080 provision.go:84] configureAuth start
	I0729 18:27:18.863436   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetMachineName
	I0729 18:27:18.863754   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:18.866285   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.866641   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.866668   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.866797   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.868886   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.869241   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.869270   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.869404   78080 provision.go:143] copyHostCerts
	I0729 18:27:18.869459   78080 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:27:18.869468   78080 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:27:18.869522   78080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:27:18.869614   78080 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:27:18.869624   78080 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:27:18.869652   78080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:27:18.869740   78080 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:27:18.869750   78080 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:27:18.869772   78080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:27:18.869833   78080 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-386663 san=[127.0.0.1 192.168.50.70 localhost minikube old-k8s-version-386663]
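provision.go above generates a server certificate whose SANs cover 127.0.0.1, the machine IP, and the host names, signed by the minikube CA (ca.pem/ca-key.pem). The Go sketch below shows how such a SAN certificate can be produced; it is self-signed here for brevity rather than CA-signed as in the real provisioner.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-386663"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log line above.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-386663"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.70")},
	}
	// Self-signed for brevity; the real flow passes the CA certificate and key as the parent here.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}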
	I0729 18:27:19.142743   78080 provision.go:177] copyRemoteCerts
	I0729 18:27:19.142808   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:27:19.142842   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.145484   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.145843   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.145872   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.146092   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.146334   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.146532   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.146692   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.230725   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:27:19.255862   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 18:27:19.290922   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 18:27:19.317519   78080 provision.go:87] duration metric: took 454.081583ms to configureAuth
	I0729 18:27:19.317549   78080 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:27:19.317766   78080 config.go:182] Loaded profile config "old-k8s-version-386663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 18:27:19.317854   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.320636   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.321074   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.321110   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.321346   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.321603   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.321782   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.321959   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.322158   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:19.322336   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:19.322351   78080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:27:19.626713   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:27:19.626737   78080 machine.go:97] duration metric: took 1.107631867s to provisionDockerMachine
	I0729 18:27:19.626749   78080 start.go:293] postStartSetup for "old-k8s-version-386663" (driver="kvm2")
	I0729 18:27:19.626763   78080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:27:19.626834   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.627168   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:27:19.627197   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.629389   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.629751   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.629782   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.629907   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.630102   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.630302   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.630460   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.709702   78080 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:27:19.713879   78080 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:27:19.713913   78080 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:27:19.713994   78080 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:27:19.714093   78080 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:27:19.714215   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:27:19.725226   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:19.751727   78080 start.go:296] duration metric: took 124.964072ms for postStartSetup
	I0729 18:27:19.751767   78080 fix.go:56] duration metric: took 19.951972224s for fixHost
	I0729 18:27:19.751796   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.754481   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.754843   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.754877   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.755107   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.755321   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.755482   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.755663   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.755829   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:19.756012   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:19.756024   78080 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 18:27:19.859279   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277639.831700968
	
	I0729 18:27:19.859302   78080 fix.go:216] guest clock: 1722277639.831700968
	I0729 18:27:19.859309   78080 fix.go:229] Guest: 2024-07-29 18:27:19.831700968 +0000 UTC Remote: 2024-07-29 18:27:19.751770935 +0000 UTC m=+272.565043390 (delta=79.930033ms)
	I0729 18:27:19.859327   78080 fix.go:200] guest clock delta is within tolerance: 79.930033ms
	I0729 18:27:19.859332   78080 start.go:83] releasing machines lock for "old-k8s-version-386663", held for 20.059569122s
	I0729 18:27:19.859353   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.859661   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:19.862741   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.863225   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.863261   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.863449   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.864092   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.864309   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.864392   78080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:27:19.864432   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.864547   78080 ssh_runner.go:195] Run: cat /version.json
	I0729 18:27:19.864572   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.867636   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.867798   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.868019   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.868044   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.868178   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.868330   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.868356   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.868360   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.868500   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.868587   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.868667   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.868754   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.868910   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.869046   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.947441   78080 ssh_runner.go:195] Run: systemctl --version
	I0729 18:27:19.967868   78080 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:27:20.114336   78080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:27:20.121716   78080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:27:20.121793   78080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:27:20.143272   78080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:27:20.143298   78080 start.go:495] detecting cgroup driver to use...
	I0729 18:27:20.143385   78080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:27:20.162433   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:27:20.178310   78080 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:27:20.178397   78080 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:27:20.194091   78080 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:27:20.209796   78080 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:27:20.341466   78080 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:27:20.514215   78080 docker.go:233] disabling docker service ...
	I0729 18:27:20.514338   78080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:27:20.531018   78080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:27:20.551839   78080 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:27:20.680430   78080 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:27:20.834782   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:27:20.852454   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:27:20.874962   78080 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 18:27:20.875017   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:20.886550   78080 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:27:20.886619   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:20.899344   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:20.914254   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:20.927308   78080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:27:20.939807   78080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:27:20.951648   78080 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:27:20.951738   78080 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:27:20.967918   78080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:27:20.979872   78080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:21.125398   78080 ssh_runner.go:195] Run: sudo systemctl restart crio
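For reference, the CRI-O preparation above condenses to a handful of shell steps. This is only a recap of the commands just logged; the pause image tag, cgroup manager and socket path are the values used in this run:

	# Point crictl at the CRI-O socket.
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# Pin the pause image and cgroup driver expected for Kubernetes v1.20.0.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# bridge-nf-call-iptables only exists once the br_netfilter module is loaded.
	sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	# Pick up the new configuration.
	sudo systemctl daemon-reload && sudo systemctl restart crio
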
	I0729 18:27:21.290736   78080 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:27:21.290816   78080 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:27:21.296922   78080 start.go:563] Will wait 60s for crictl version
	I0729 18:27:21.296987   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:21.302200   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:27:21.350783   78080 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:27:21.350919   78080 ssh_runner.go:195] Run: crio --version
	I0729 18:27:21.391539   78080 ssh_runner.go:195] Run: crio --version
	I0729 18:27:21.441225   78080 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 18:27:21.442583   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:21.446238   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:21.446728   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:21.446756   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:21.446988   78080 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 18:27:21.452537   78080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:21.470394   78080 kubeadm.go:883] updating cluster {Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:27:21.470555   78080 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:27:21.470610   78080 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:21.531670   78080 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 18:27:21.531742   78080 ssh_runner.go:195] Run: which lz4
	I0729 18:27:21.536436   78080 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 18:27:21.542100   78080 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:27:21.542139   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 18:27:19.883514   77394 main.go:141] libmachine: (no-preload-888056) Calling .Start
	I0729 18:27:19.883693   77394 main.go:141] libmachine: (no-preload-888056) Ensuring networks are active...
	I0729 18:27:19.884447   77394 main.go:141] libmachine: (no-preload-888056) Ensuring network default is active
	I0729 18:27:19.884847   77394 main.go:141] libmachine: (no-preload-888056) Ensuring network mk-no-preload-888056 is active
	I0729 18:27:19.885240   77394 main.go:141] libmachine: (no-preload-888056) Getting domain xml...
	I0729 18:27:19.886133   77394 main.go:141] libmachine: (no-preload-888056) Creating domain...
	I0729 18:27:21.226599   77394 main.go:141] libmachine: (no-preload-888056) Waiting to get IP...
	I0729 18:27:21.227673   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:21.228215   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:21.228278   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:21.228178   79288 retry.go:31] will retry after 290.676407ms: waiting for machine to come up
	I0729 18:27:21.520818   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:21.521458   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:21.521480   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:21.521360   79288 retry.go:31] will retry after 266.145355ms: waiting for machine to come up
	I0729 18:27:21.789603   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:21.790170   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:21.790200   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:21.790137   79288 retry.go:31] will retry after 464.137123ms: waiting for machine to come up
	I0729 18:27:22.255586   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:22.256159   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:22.256184   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:22.256098   79288 retry.go:31] will retry after 562.330595ms: waiting for machine to come up
	I0729 18:27:21.691280   77859 node_ready.go:53] node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:23.188725   77859 node_ready.go:49] node "default-k8s-diff-port-502055" has status "Ready":"True"
	I0729 18:27:23.188758   77859 node_ready.go:38] duration metric: took 7.507549954s for node "default-k8s-diff-port-502055" to be "Ready" ...
	I0729 18:27:23.188772   77859 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:27:23.197714   77859 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:23.204037   77859 pod_ready.go:92] pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:23.204065   77859 pod_ready.go:81] duration metric: took 6.32123ms for pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:23.204086   77859 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:23.211765   77859 pod_ready.go:92] pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:23.211791   77859 pod_ready.go:81] duration metric: took 7.69614ms for pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:23.211803   77859 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:21.544757   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:24.043649   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:23.329902   78080 crio.go:462] duration metric: took 1.793505279s to copy over tarball
	I0729 18:27:23.329979   78080 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:27:26.453768   78080 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.123735537s)
	I0729 18:27:26.453800   78080 crio.go:469] duration metric: took 3.123869338s to extract the tarball
	I0729 18:27:26.453809   78080 ssh_runner.go:146] rm: /preloaded.tar.lz4
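The preload path works as logged above: when the expected images are missing from the runtime, the lz4 tarball is copied to /preloaded.tar.lz4 over scp and unpacked into /var, preserving extended attributes so image layers keep their file capabilities. A minimal sketch of that check-and-extract step, assuming the tarball has already been copied to the guest:

	# Extract the preloaded image tarball only if the images are not already present.
	sudo crictl images --output json | grep -q 'kube-apiserver:v1.20.0' || {
	  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	  sudo rm /preloaded.tar.lz4
	}
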
	I0729 18:27:26.501748   78080 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:26.538093   78080 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 18:27:26.538124   78080 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 18:27:26.538226   78080 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.538297   78080 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 18:27:26.538387   78080 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.538232   78080 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:26.538441   78080 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.538303   78080 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.538277   78080 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.538783   78080 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.540806   78080 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.540823   78080 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.540847   78080 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.540858   78080 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 18:27:26.540806   78080 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:26.540894   78080 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.540937   78080 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.540987   78080 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.700993   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.704402   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.712647   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.714034   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.715935   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.753888   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.758588   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 18:27:26.837981   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:26.844473   78080 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 18:27:26.844532   78080 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.844578   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.877082   78080 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 18:27:26.877134   78080 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.877183   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.889792   78080 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 18:27:26.889887   78080 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.889842   78080 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 18:27:26.889944   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.889983   78080 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.890034   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.916338   78080 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 18:27:26.916388   78080 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.916440   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.916437   78080 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 18:27:26.916540   78080 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.916581   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.942747   78080 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 18:27:26.942794   78080 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 18:27:26.942839   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:27.056976   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:27.056976   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 18:27:27.057045   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:27.057071   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:27.057101   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:27.057152   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:27.057178   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 18:27:27.219396   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 18:27:22.820490   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:22.820969   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:22.820993   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:22.820906   79288 retry.go:31] will retry after 728.452145ms: waiting for machine to come up
	I0729 18:27:23.550655   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:23.551337   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:23.551361   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:23.551287   79288 retry.go:31] will retry after 782.583051ms: waiting for machine to come up
	I0729 18:27:24.335785   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:24.336257   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:24.336310   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:24.336235   79288 retry.go:31] will retry after 1.040109521s: waiting for machine to come up
	I0729 18:27:25.377676   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:25.378187   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:25.378231   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:25.378153   79288 retry.go:31] will retry after 1.276093038s: waiting for machine to come up
	I0729 18:27:26.655479   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:26.655922   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:26.655950   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:26.655872   79288 retry.go:31] will retry after 1.267687539s: waiting for machine to come up
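While the no-preload-888056 guest boots, the KVM driver repeatedly asks libvirt for a DHCP lease matching the machine's MAC address, backing off between attempts. A purely illustrative host-side equivalent, using the network and MAC names from this run, would be:

	# Hypothetical polling loop: wait until libvirt reports a DHCP lease for the machine's MAC.
	while ! virsh -c qemu:///system net-dhcp-leases mk-no-preload-888056 | grep -q '52:54:00:b2:b0:1a'; do
	  sleep 1
	done
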
	I0729 18:27:25.219175   77859 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:27.225735   77859 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:27.718741   77859 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:27.718772   77859 pod_ready.go:81] duration metric: took 4.506959705s for pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.718786   77859 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.723687   77859 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:27.723709   77859 pod_ready.go:81] duration metric: took 4.915901ms for pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.723720   77859 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cgdm8" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.728504   77859 pod_ready.go:92] pod "kube-proxy-cgdm8" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:27.728526   77859 pod_ready.go:81] duration metric: took 4.797185ms for pod "kube-proxy-cgdm8" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.728538   77859 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.733036   77859 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:27.733061   77859 pod_ready.go:81] duration metric: took 4.514471ms for pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.733073   77859 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:29.739966   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:26.044607   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:28.543664   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:27.219541   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 18:27:27.223329   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 18:27:27.223406   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 18:27:27.223450   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 18:27:27.223492   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 18:27:27.223536   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 18:27:27.223567   78080 cache_images.go:92] duration metric: took 685.427642ms to LoadCachedImages
	W0729 18:27:27.223653   78080 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0729 18:27:27.223672   78080 kubeadm.go:934] updating node { 192.168.50.70 8443 v1.20.0 crio true true} ...
	I0729 18:27:27.223785   78080 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-386663 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:27:27.223866   78080 ssh_runner.go:195] Run: crio config
	I0729 18:27:27.273186   78080 cni.go:84] Creating CNI manager for ""
	I0729 18:27:27.273207   78080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:27:27.273217   78080 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:27:27.273241   78080 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.70 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-386663 NodeName:old-k8s-version-386663 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 18:27:27.273424   78080 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-386663"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:27:27.273498   78080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 18:27:27.285247   78080 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:27:27.285327   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:27:27.295747   78080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 18:27:27.314192   78080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:27:27.331654   78080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 18:27:27.351717   78080 ssh_runner.go:195] Run: grep 192.168.50.70	control-plane.minikube.internal$ /etc/hosts
	I0729 18:27:27.356205   78080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:27.370446   78080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:27.509250   78080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:27:27.528776   78080 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663 for IP: 192.168.50.70
	I0729 18:27:27.528804   78080 certs.go:194] generating shared ca certs ...
	I0729 18:27:27.528823   78080 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:27.528991   78080 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:27:27.529045   78080 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:27:27.529061   78080 certs.go:256] generating profile certs ...
	I0729 18:27:27.529194   78080 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/client.key
	I0729 18:27:27.529308   78080 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.key.71ea3f9f
	I0729 18:27:27.529364   78080 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.key
	I0729 18:27:27.529529   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:27:27.529569   78080 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:27:27.529584   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:27:27.529614   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:27:27.529645   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:27:27.529689   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:27:27.529751   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:27.530573   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:27:27.582122   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:27:27.626846   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:27:27.663609   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:27:27.700294   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 18:27:27.746614   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:27:27.785212   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:27:27.834479   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:27:27.866939   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:27:27.892613   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:27:27.919059   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:27:27.947557   78080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:27:27.968625   78080 ssh_runner.go:195] Run: openssl version
	I0729 18:27:27.976500   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:27:27.991016   78080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:27:27.996228   78080 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:27:27.996285   78080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:27:28.002529   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:27:28.013844   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:27:28.025388   78080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:28.029982   78080 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:28.030042   78080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:28.036362   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:27:28.050134   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:27:28.062742   78080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:27:28.067240   78080 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:27:28.067293   78080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:27:28.072973   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
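Each extra CA certificate is copied into /usr/share/ca-certificates and then registered with OpenSSL via a symlink named after its subject hash, which is what the openssl x509 -hash calls above compute. The same pattern, using the file names from this run:

	# Register a CA certificate under its OpenSSL subject hash.
	cert=/usr/share/ca-certificates/183932.pem
	hash=$(openssl x509 -hash -noout -in "$cert")   # 3ec20f2e for this certificate
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
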
	I0729 18:27:28.084143   78080 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:27:28.089526   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:27:28.096556   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:27:28.103044   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:27:28.109337   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:27:28.115455   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:27:28.121449   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
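The -checkend 86400 probes above succeed only if the certificate will still be valid 24 hours from now; a non-zero exit marks the certificate as expiring. For example:

	# Exit status 0 means the certificate does not expire within the next 86400 seconds.
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"
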
	I0729 18:27:28.127395   78080 kubeadm.go:392] StartCluster: {Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:27:28.127504   78080 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:27:28.127581   78080 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:28.176772   78080 cri.go:89] found id: ""
	I0729 18:27:28.176837   78080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:27:28.187955   78080 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:27:28.187979   78080 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:27:28.188034   78080 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:27:28.197926   78080 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:27:28.199364   78080 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-386663" does not appear in /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:27:28.200382   78080 kubeconfig.go:62] /home/jenkins/minikube-integration/19345-11206/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-386663" cluster setting kubeconfig missing "old-k8s-version-386663" context setting]
	I0729 18:27:28.201737   78080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:28.287712   78080 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:27:28.300675   78080 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.70
	I0729 18:27:28.300716   78080 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:27:28.300728   78080 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:27:28.300795   78080 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:28.343880   78080 cri.go:89] found id: ""
	I0729 18:27:28.343962   78080 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:27:28.362391   78080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:27:28.372805   78080 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:27:28.372830   78080 kubeadm.go:157] found existing configuration files:
	
	I0729 18:27:28.372882   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:27:28.383540   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:27:28.383629   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:27:28.396564   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:27:28.409151   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:27:28.409208   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:27:28.422243   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:27:28.434736   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:27:28.434839   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:27:28.447681   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:27:28.460008   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:27:28.460073   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:27:28.472647   78080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:27:28.484179   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:28.634526   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:29.206575   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:29.449626   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:29.550859   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
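The five commands above are minikube's restartPrimaryControlPlane path: rather than running a full `kubeadm init`, it replays individual init phases against the freshly copied /var/tmp/minikube/kubeadm.yaml. A minimal annotated sketch of that sequence (KCONF and KPATH are shorthand introduced here; the comments describe standard kubeadm phase behaviour, not minikube-specific output):

    KCONF=/var/tmp/minikube/kubeadm.yaml
    KPATH=/var/lib/minikube/binaries/v1.20.0
    sudo env PATH="$KPATH:$PATH" kubeadm init phase certs all         --config "$KCONF"  # cluster CA and component certificates
    sudo env PATH="$KPATH:$PATH" kubeadm init phase kubeconfig all    --config "$KCONF"  # admin.conf, kubelet.conf, controller-manager.conf, scheduler.conf
    sudo env PATH="$KPATH:$PATH" kubeadm init phase kubelet-start     --config "$KCONF"  # kubelet config/env and a kubelet (re)start
    sudo env PATH="$KPATH:$PATH" kubeadm init phase control-plane all --config "$KCONF"  # static pod manifests for apiserver, controller-manager, scheduler
    sudo env PATH="$KPATH:$PATH" kubeadm init phase etcd local        --config "$KCONF"  # local etcd static pod manifest

The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines that follow are minikube polling for the regenerated kube-apiserver static pod to come up.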
	I0729 18:27:29.681945   78080 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:27:29.682015   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:30.182098   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:30.682977   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:31.182152   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:31.682468   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:32.183031   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:27.924957   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:27.925430   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:27.925461   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:27.925378   79288 retry.go:31] will retry after 1.455979038s: waiting for machine to come up
	I0729 18:27:29.383257   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:29.383769   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:29.383793   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:29.383722   79288 retry.go:31] will retry after 1.862834258s: waiting for machine to come up
	I0729 18:27:31.248806   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:31.249394   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:31.249414   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:31.249344   79288 retry.go:31] will retry after 3.203097967s: waiting for machine to come up
	I0729 18:27:32.242350   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:34.738663   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:31.043735   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:33.543152   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:32.682567   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:33.182100   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:33.682494   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:34.183075   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:34.683115   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:35.183094   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:35.683092   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:36.182173   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:36.682843   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:37.182324   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:34.453552   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:34.453906   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:34.453930   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:34.453852   79288 retry.go:31] will retry after 3.166208105s: waiting for machine to come up
	I0729 18:27:36.739239   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:38.740812   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:35.543428   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:38.042603   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:37.622330   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.622738   77394 main.go:141] libmachine: (no-preload-888056) Found IP for machine: 192.168.72.80
	I0729 18:27:37.622767   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has current primary IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.622779   77394 main.go:141] libmachine: (no-preload-888056) Reserving static IP address...
	I0729 18:27:37.623108   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "no-preload-888056", mac: "52:54:00:b2:b0:1a", ip: "192.168.72.80"} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.623144   77394 main.go:141] libmachine: (no-preload-888056) DBG | skip adding static IP to network mk-no-preload-888056 - found existing host DHCP lease matching {name: "no-preload-888056", mac: "52:54:00:b2:b0:1a", ip: "192.168.72.80"}
	I0729 18:27:37.623160   77394 main.go:141] libmachine: (no-preload-888056) Reserved static IP address: 192.168.72.80
	I0729 18:27:37.623174   77394 main.go:141] libmachine: (no-preload-888056) Waiting for SSH to be available...
	I0729 18:27:37.623183   77394 main.go:141] libmachine: (no-preload-888056) DBG | Getting to WaitForSSH function...
	I0729 18:27:37.625391   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.625732   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.625759   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.625927   77394 main.go:141] libmachine: (no-preload-888056) DBG | Using SSH client type: external
	I0729 18:27:37.625948   77394 main.go:141] libmachine: (no-preload-888056) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa (-rw-------)
	I0729 18:27:37.625994   77394 main.go:141] libmachine: (no-preload-888056) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:27:37.626008   77394 main.go:141] libmachine: (no-preload-888056) DBG | About to run SSH command:
	I0729 18:27:37.626020   77394 main.go:141] libmachine: (no-preload-888056) DBG | exit 0
	I0729 18:27:37.750587   77394 main.go:141] libmachine: (no-preload-888056) DBG | SSH cmd err, output: <nil>: 
	I0729 18:27:37.750986   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetConfigRaw
	I0729 18:27:37.751717   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetIP
	I0729 18:27:37.754387   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.754753   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.754781   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.754995   77394 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/config.json ...
	I0729 18:27:37.755184   77394 machine.go:94] provisionDockerMachine start ...
	I0729 18:27:37.755207   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:37.755397   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:37.757649   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.757965   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.757988   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.758128   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:37.758297   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.758463   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.758599   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:37.758754   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:37.758918   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:37.758927   77394 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:27:37.862940   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:27:37.862976   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:27:37.863205   77394 buildroot.go:166] provisioning hostname "no-preload-888056"
	I0729 18:27:37.863234   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:27:37.863425   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:37.866190   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.866538   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.866565   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.866705   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:37.866878   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.867046   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.867166   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:37.867307   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:37.867478   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:37.867490   77394 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-888056 && echo "no-preload-888056" | sudo tee /etc/hostname
	I0729 18:27:37.985031   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-888056
	
	I0729 18:27:37.985070   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:37.987577   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.987917   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.987945   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.988126   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:37.988311   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.988469   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.988601   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:37.988786   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:37.988994   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:37.989012   77394 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-888056' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-888056/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-888056' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:27:38.103831   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:27:38.103853   77394 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:27:38.103870   77394 buildroot.go:174] setting up certificates
	I0729 18:27:38.103878   77394 provision.go:84] configureAuth start
	I0729 18:27:38.103886   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:27:38.104166   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetIP
	I0729 18:27:38.107080   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.107493   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.107521   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.107690   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.110087   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.110495   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.110520   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.110738   77394 provision.go:143] copyHostCerts
	I0729 18:27:38.110793   77394 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:27:38.110802   77394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:27:38.110853   77394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:27:38.110968   77394 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:27:38.110978   77394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:27:38.110998   77394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:27:38.111056   77394 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:27:38.111063   77394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:27:38.111080   77394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:27:38.111149   77394 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.no-preload-888056 san=[127.0.0.1 192.168.72.80 localhost minikube no-preload-888056]
	I0729 18:27:38.327305   77394 provision.go:177] copyRemoteCerts
	I0729 18:27:38.327378   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:27:38.327407   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.330008   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.330304   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.330327   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.330516   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:38.330739   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.330908   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:38.331071   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:27:38.414678   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 18:27:38.443418   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:27:38.469248   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 18:27:38.494014   77394 provision.go:87] duration metric: took 390.106553ms to configureAuth
	I0729 18:27:38.494049   77394 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:27:38.494245   77394 config.go:182] Loaded profile config "no-preload-888056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:27:38.494357   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.497162   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.497586   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.497620   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.497946   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:38.498137   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.498328   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.498566   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:38.498766   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:38.498940   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:38.498955   77394 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:27:38.762438   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:27:38.762462   77394 machine.go:97] duration metric: took 1.007266999s to provisionDockerMachine
	I0729 18:27:38.762473   77394 start.go:293] postStartSetup for "no-preload-888056" (driver="kvm2")
	I0729 18:27:38.762484   77394 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:27:38.762511   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:38.762797   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:27:38.762832   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.765677   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.766031   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.766054   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.766222   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:38.766432   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.766621   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:38.766774   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:27:38.854492   77394 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:27:38.858934   77394 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:27:38.858962   77394 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:27:38.859041   77394 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:27:38.859136   77394 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:27:38.859251   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:27:38.869459   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:38.894422   77394 start.go:296] duration metric: took 131.935433ms for postStartSetup
	I0729 18:27:38.894466   77394 fix.go:56] duration metric: took 19.034987866s for fixHost
	I0729 18:27:38.894492   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.897266   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.897654   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.897684   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.897890   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:38.898102   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.898250   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.898356   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:38.898547   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:38.898721   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:38.898732   77394 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:27:39.003526   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277658.970659996
	
	I0729 18:27:39.003571   77394 fix.go:216] guest clock: 1722277658.970659996
	I0729 18:27:39.003581   77394 fix.go:229] Guest: 2024-07-29 18:27:38.970659996 +0000 UTC Remote: 2024-07-29 18:27:38.8944731 +0000 UTC m=+356.533366653 (delta=76.186896ms)
	I0729 18:27:39.003600   77394 fix.go:200] guest clock delta is within tolerance: 76.186896ms
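Worked check of the delta reported above: guest clock 18:27:38.970659996 minus the host-observed remote time 18:27:38.894473100 is 0.076186896 s, i.e. the 76.186896ms that fix.go logs and treats as within its allowed clock skew.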
	I0729 18:27:39.003605   77394 start.go:83] releasing machines lock for "no-preload-888056", held for 19.144159359s
	I0729 18:27:39.003622   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:39.003881   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetIP
	I0729 18:27:39.006550   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.006850   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:39.006886   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.007005   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:39.007597   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:39.007779   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:39.007879   77394 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:27:39.007939   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:39.008001   77394 ssh_runner.go:195] Run: cat /version.json
	I0729 18:27:39.008026   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:39.010634   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.010941   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:39.010965   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.010984   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.011257   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:39.011442   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:39.011474   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.011487   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:39.011632   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:39.011678   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:39.011782   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:39.011951   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:39.011985   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:27:39.012094   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:27:39.114446   77394 ssh_runner.go:195] Run: systemctl --version
	I0729 18:27:39.120848   77394 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:27:39.266976   77394 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:27:39.273603   77394 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:27:39.273670   77394 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:27:39.295511   77394 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:27:39.295533   77394 start.go:495] detecting cgroup driver to use...
	I0729 18:27:39.295593   77394 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:27:39.313692   77394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:27:39.328435   77394 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:27:39.328502   77394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:27:39.342580   77394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:27:39.356694   77394 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:27:39.474555   77394 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:27:39.632766   77394 docker.go:233] disabling docker service ...
	I0729 18:27:39.632827   77394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:27:39.648961   77394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:27:39.663277   77394 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:27:39.813329   77394 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:27:39.944017   77394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:27:39.957624   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:27:39.976348   77394 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 18:27:39.976401   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:39.986672   77394 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:27:39.986735   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:39.996867   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.007547   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.018141   77394 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:27:40.029258   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.040007   77394 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.057611   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
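Taken together, the sed edits above (plus the earlier /etc/sysconfig/crio.minikube write) are how minikube configures CRI-O for this profile. A reconstruction of the relevant keys in /etc/crio/crio.conf.d/02-crio.conf after the edits — section headers as in a stock CRI-O config, since the log only shows the sed commands, not the resulting file:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The sysctl/modprobe and `systemctl restart crio` steps that follow ensure br_netfilter and IPv4 forwarding are enabled before the runtime is restarted with this configuration.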
	I0729 18:27:40.068107   77394 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:27:40.077798   77394 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:27:40.077877   77394 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:27:40.091040   77394 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:27:40.100846   77394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:40.227049   77394 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:27:40.368213   77394 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:27:40.368295   77394 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:27:40.374168   77394 start.go:563] Will wait 60s for crictl version
	I0729 18:27:40.374239   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.378268   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:27:40.422500   77394 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:27:40.422579   77394 ssh_runner.go:195] Run: crio --version
	I0729 18:27:40.451170   77394 ssh_runner.go:195] Run: crio --version
	I0729 18:27:40.481789   77394 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 18:27:37.682180   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:38.182453   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:38.682639   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:39.182874   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:39.682496   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:40.182727   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:40.683073   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:41.182060   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:41.682421   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:42.182813   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:40.483209   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetIP
	I0729 18:27:40.486303   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:40.486738   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:40.486768   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:40.487032   77394 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 18:27:40.491318   77394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:40.505196   77394 kubeadm.go:883] updating cluster {Name:no-preload-888056 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-888056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.80 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:27:40.505303   77394 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 18:27:40.505333   77394 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:40.541356   77394 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 18:27:40.541380   77394 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 18:27:40.541445   77394 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:40.541452   77394 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:40.541465   77394 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:40.541495   77394 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.541503   77394 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.541527   77394 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.541583   77394 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:40.542060   77394 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 18:27:40.543507   77394 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:40.543519   77394 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.543505   77394 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 18:27:40.543535   77394 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.543504   77394 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.543761   77394 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:40.543799   77394 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:40.543999   77394 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:40.693026   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.709057   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.715664   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.720337   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:40.746126   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 18:27:40.748805   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:40.759200   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:40.768613   77394 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 18:27:40.768659   77394 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.768705   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.812940   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:40.852143   77394 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 18:27:40.852173   77394 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 18:27:40.852191   77394 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.852206   77394 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.852237   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.852249   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.890477   77394 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 18:27:40.890521   77394 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:40.890566   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.991390   77394 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 18:27:40.991435   77394 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:40.991462   77394 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 18:27:40.991486   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.991501   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.991508   77394 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:40.991548   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.991556   77394 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 18:27:40.991579   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.991595   77394 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:40.991609   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.991654   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.991694   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:41.087626   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 18:27:41.087736   77394 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 18:27:41.087742   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 18:27:41.087782   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:41.087819   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 18:27:41.087830   77394 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 18:27:41.087883   77394 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 18:27:41.091774   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 18:27:41.091828   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:41.091858   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:41.091873   77394 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0729 18:27:41.104679   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 18:27:41.104702   77394 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 18:27:41.104733   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 18:27:41.104750   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 18:27:41.155992   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 18:27:41.156114   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 18:27:41.156227   77394 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 18:27:41.169410   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 18:27:41.169535   77394 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 18:27:41.176103   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 18:27:41.176116   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 18:27:41.176214   77394 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 18:27:41.241044   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:43.739887   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:40.543004   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:43.044338   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:42.682911   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:43.182279   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:43.682506   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:44.182109   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:44.682593   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:45.183002   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:45.682275   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:46.182491   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:46.683027   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:47.182311   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:44.874768   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.769989933s)
	I0729 18:27:44.874798   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 18:27:44.874827   77394 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 18:27:44.874861   77394 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (3.71860957s)
	I0729 18:27:44.874894   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 18:27:44.874906   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 18:27:44.874930   77394 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.705380577s)
	I0729 18:27:44.874947   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 18:27:44.874972   77394 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (3.698734733s)
	I0729 18:27:44.875001   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 18:27:46.333065   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.458135446s)
	I0729 18:27:46.333109   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 18:27:46.333137   77394 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 18:27:46.333175   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 18:27:45.739935   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:47.740654   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:45.542272   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:47.543683   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:47.682979   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:48.183024   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:48.682708   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:49.182427   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:49.682335   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:50.182146   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:50.682716   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:51.182231   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:51.683106   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:52.182739   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:48.194389   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.861190748s)
	I0729 18:27:48.194419   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 18:27:48.194443   77394 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 18:27:48.194483   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 18:27:50.159353   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.964849018s)
	I0729 18:27:50.159384   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 18:27:50.159427   77394 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 18:27:50.159494   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 18:27:52.256998   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.097482067s)
	I0729 18:27:52.257038   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 18:27:52.257075   77394 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 18:27:52.257125   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 18:27:50.239878   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:52.740167   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:50.042299   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:52.042567   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:54.043462   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:52.682628   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:53.182081   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:53.682919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:54.183194   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:54.682506   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:55.182992   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:55.682152   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:56.183083   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:56.682897   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:57.182789   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:52.899503   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 18:27:52.899539   77394 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 18:27:52.899594   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 18:27:54.868011   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.968389841s)
	I0729 18:27:54.868043   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 18:27:54.868075   77394 cache_images.go:123] Successfully loaded all cached images
	I0729 18:27:54.868080   77394 cache_images.go:92] duration metric: took 14.326689217s to LoadCachedImages
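
The image-cache step above follows a simple pattern: stat the archive on the guest, skip the transfer when it already exists, then load it into the CRI-O image store with "podman load". Below is a minimal local sketch of that pattern in Go; it is not minikube's actual cache_images/ssh_runner code, and the archive path and the local os/exec invocation are illustrative assumptions.

    // load_cached_image.go
    //
    // Minimal local sketch (not minikube's implementation) of the
    // "skip copy if present, then podman load" pattern from the log above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // loadImageArchive loads a cached image tarball into podman's store.
    // The caller is expected to have transferred the archive beforehand.
    func loadImageArchive(path string) error {
    	if _, err := os.Stat(path); err != nil {
    		return fmt.Errorf("archive not present, would need transfer first: %w", err)
    	}
    	// Equivalent of the logged "sudo podman load -i <archive>" step.
    	cmd := exec.Command("sudo", "podman", "load", "-i", path)
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	return cmd.Run()
    }

    func main() {
    	// Hypothetical archive path, mirroring the paths seen in the log.
    	if err := loadImageArchive("/var/lib/minikube/images/etcd_3.5.14-0"); err != nil {
    		fmt.Fprintln(os.Stderr, "load failed:", err)
    		os.Exit(1)
    	}
    }
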
	I0729 18:27:54.868088   77394 kubeadm.go:934] updating node { 192.168.72.80 8443 v1.31.0-beta.0 crio true true} ...
	I0729 18:27:54.868226   77394 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-888056 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-888056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:27:54.868305   77394 ssh_runner.go:195] Run: crio config
	I0729 18:27:54.928569   77394 cni.go:84] Creating CNI manager for ""
	I0729 18:27:54.928591   77394 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:27:54.928604   77394 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:27:54.928633   77394 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.80 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-888056 NodeName:no-preload-888056 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:27:54.928800   77394 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-888056"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:27:54.928871   77394 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 18:27:54.939479   77394 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:27:54.939534   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:27:54.948928   77394 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0729 18:27:54.966700   77394 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 18:27:54.984218   77394 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0729 18:27:55.000813   77394 ssh_runner.go:195] Run: grep 192.168.72.80	control-plane.minikube.internal$ /etc/hosts
	I0729 18:27:55.004529   77394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:55.016140   77394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:55.141053   77394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:27:55.158874   77394 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056 for IP: 192.168.72.80
	I0729 18:27:55.158897   77394 certs.go:194] generating shared ca certs ...
	I0729 18:27:55.158918   77394 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:55.159074   77394 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:27:55.159136   77394 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:27:55.159150   77394 certs.go:256] generating profile certs ...
	I0729 18:27:55.159245   77394 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/client.key
	I0729 18:27:55.159320   77394 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/apiserver.key.f09a151f
	I0729 18:27:55.159373   77394 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/proxy-client.key
	I0729 18:27:55.159511   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:27:55.159552   77394 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:27:55.159566   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:27:55.159600   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:27:55.159641   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:27:55.159680   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:27:55.159734   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:55.160575   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:27:55.211823   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:27:55.248637   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:27:55.287972   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:27:55.317920   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 18:27:55.346034   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:27:55.377569   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:27:55.402593   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 18:27:55.427969   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:27:55.452060   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:27:55.476635   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:27:55.500831   77394 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:27:55.518744   77394 ssh_runner.go:195] Run: openssl version
	I0729 18:27:55.524865   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:27:55.536601   77394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:27:55.541752   77394 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:27:55.541807   77394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:27:55.548070   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 18:27:55.559866   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:27:55.571833   77394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:27:55.576304   77394 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:27:55.576342   77394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:27:55.582204   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:27:55.594531   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:27:55.605773   77394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:55.610585   77394 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:55.610633   77394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:55.616478   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:27:55.628160   77394 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:27:55.632691   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:27:55.638793   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:27:55.644678   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:27:55.651117   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:27:55.657397   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:27:55.663351   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
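
The openssl x509 -checkend 86400 calls above succeed only when a certificate will still be valid 24 hours from now. A hedged Go equivalent of that check follows; the certificate path is an assumption taken from the log, and this is a sketch rather than minikube's own certs.go logic.

    // cert_checkend.go
    //
    // Sketch of what "openssl x509 -checkend 86400" verifies: that the
    // certificate's NotAfter lies at least 24h in the future.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func certValidFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block found in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// -checkend 86400 succeeds only if NotAfter is later than now+86400s.
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("valid for at least 24h:", ok)
    }
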
	I0729 18:27:55.670080   77394 kubeadm.go:392] StartCluster: {Name:no-preload-888056 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-888056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.80 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:27:55.670183   77394 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:27:55.670248   77394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:55.712280   77394 cri.go:89] found id: ""
	I0729 18:27:55.712343   77394 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:27:55.722878   77394 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:27:55.722898   77394 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:27:55.722935   77394 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:27:55.732704   77394 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:27:55.733646   77394 kubeconfig.go:125] found "no-preload-888056" server: "https://192.168.72.80:8443"
	I0729 18:27:55.736512   77394 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:27:55.748360   77394 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.80
	I0729 18:27:55.748403   77394 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:27:55.748416   77394 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:27:55.748464   77394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:55.789773   77394 cri.go:89] found id: ""
	I0729 18:27:55.789854   77394 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:27:55.808905   77394 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:27:55.819969   77394 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:27:55.819991   77394 kubeadm.go:157] found existing configuration files:
	
	I0729 18:27:55.820064   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:27:55.829392   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:27:55.829445   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:27:55.838934   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:27:55.848659   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:27:55.848720   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:27:55.859490   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:27:55.870024   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:27:55.870076   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:27:55.881599   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:27:55.891805   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:27:55.891869   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:27:55.901750   77394 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:27:55.911525   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:56.021031   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:57.075545   77394 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.054482988s)
	I0729 18:27:57.075571   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:57.302701   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:57.382837   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:55.261397   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:57.738688   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:59.739828   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:56.543870   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:59.043285   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:57.682237   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:58.182211   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:58.682456   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:59.182669   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:59.682863   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:00.182261   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:00.682993   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:01.182832   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:01.682899   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:02.182765   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:57.492480   77394 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:27:57.492580   77394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:57.993240   77394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:58.492965   77394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:58.517442   77394 api_server.go:72] duration metric: took 1.024961129s to wait for apiserver process to appear ...
	I0729 18:27:58.517479   77394 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:27:58.517505   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:27:58.518046   77394 api_server.go:269] stopped: https://192.168.72.80:8443/healthz: Get "https://192.168.72.80:8443/healthz": dial tcp 192.168.72.80:8443: connect: connection refused
	I0729 18:27:59.017614   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:02.088238   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:28:02.088265   77394 api_server.go:103] status: https://192.168.72.80:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:28:02.088277   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:02.147855   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:28:02.147882   77394 api_server.go:103] status: https://192.168.72.80:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:28:02.518439   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:02.525213   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:28:02.525247   77394 api_server.go:103] status: https://192.168.72.80:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:28:03.018275   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:03.024993   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:28:03.025023   77394 api_server.go:103] status: https://192.168.72.80:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:28:03.517564   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:03.523409   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 200:
	ok
	I0729 18:28:03.529656   77394 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 18:28:03.529687   77394 api_server.go:131] duration metric: took 5.01219984s to wait for apiserver health ...
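
The healthz wait above tolerates 403 and 500 responses while the apiserver finishes its post-start hooks, and stops as soon as /healthz returns 200. The following is a simplified polling sketch of that behaviour, not minikube's api_server.go; InsecureSkipVerify is an assumption to keep the example self-contained, whereas the real client trusts the cluster CA.

    // healthz_wait.go
    //
    // Sketch: GET /healthz every 500ms until it returns HTTP 200 or the
    // deadline passes. 403/500 responses (as seen in the log) mean "keep waiting".
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"os"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // "healthz returned 200: ok"
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy within %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.72.80:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("apiserver healthy")
    }
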
	I0729 18:28:03.529698   77394 cni.go:84] Creating CNI manager for ""
	I0729 18:28:03.529706   77394 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:28:03.531527   77394 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:28:01.740935   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:03.743806   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:01.043882   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:03.542540   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:02.682331   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:03.182154   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:03.682499   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:04.182355   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:04.682338   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:05.182107   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:05.683125   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:06.182481   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:06.683153   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:07.182992   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:03.532788   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:28:03.544878   77394 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 18:28:03.586100   77394 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:28:03.604975   77394 system_pods.go:59] 8 kube-system pods found
	I0729 18:28:03.605012   77394 system_pods.go:61] "coredns-5cfdc65f69-bg5j4" [7a26ffbb-014c-4cf7-b302-214cf78374bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 18:28:03.605022   77394 system_pods.go:61] "etcd-no-preload-888056" [d76f2eb7-67d9-4ba0-8d2f-acfc78559651] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 18:28:03.605036   77394 system_pods.go:61] "kube-apiserver-no-preload-888056" [1dbea0ee-58be-47ca-b4ab-94065413768d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 18:28:03.605044   77394 system_pods.go:61] "kube-controller-manager-no-preload-888056" [fb8ce9d9-2953-4b91-8734-87bd38a63eb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 18:28:03.605051   77394 system_pods.go:61] "kube-proxy-w5z2f" [2425da76-cf2d-41c9-b8db-1370ab5333c5] Running
	I0729 18:28:03.605059   77394 system_pods.go:61] "kube-scheduler-no-preload-888056" [9958567f-116d-4094-9e7e-6208f7358486] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 18:28:03.605066   77394 system_pods.go:61] "metrics-server-78fcd8795b-jcdcw" [c506a5f8-d569-4c3d-9b6e-21b9fc63a86a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:28:03.605073   77394 system_pods.go:61] "storage-provisioner" [ccbc4fa6-1237-46ca-ac80-34972b9a43df] Running
	I0729 18:28:03.605082   77394 system_pods.go:74] duration metric: took 18.959807ms to wait for pod list to return data ...
	I0729 18:28:03.605095   77394 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:28:03.609225   77394 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:28:03.609249   77394 node_conditions.go:123] node cpu capacity is 2
	I0729 18:28:03.609261   77394 node_conditions.go:105] duration metric: took 4.16099ms to run NodePressure ...
	I0729 18:28:03.609278   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:28:03.881440   77394 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 18:28:03.886401   77394 kubeadm.go:739] kubelet initialised
	I0729 18:28:03.886429   77394 kubeadm.go:740] duration metric: took 4.958282ms waiting for restarted kubelet to initialise ...
	I0729 18:28:03.886440   77394 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:28:03.891373   77394 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:05.900595   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace has status "Ready":"False"
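
The pod_ready.go lines above poll each system-critical pod until its Ready condition is True, capped at 4m0s per pod. A rough client-go sketch of that kind of wait is shown below; the kubeconfig source, namespace and pod name are assumptions for illustration, and this is not the test helper's actual code.

    // pod_ready_wait.go
    //
    // Sketch: poll a pod until its PodReady condition is True or a timeout expires.
    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s wait in the log
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5cfdc65f69-bg5j4", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Fprintln(os.Stderr, "timed out waiting for pod to be Ready")
    	os.Exit(1)
    }
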
	I0729 18:28:06.239029   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:08.240309   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:06.042541   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:08.043322   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:07.682582   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:08.182094   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:08.682613   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:09.182936   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:09.682444   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:10.182354   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:10.682183   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:11.182502   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:11.682466   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:12.182113   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:08.397084   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:10.399546   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:10.897981   77394 pod_ready.go:92] pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:10.898006   77394 pod_ready.go:81] duration metric: took 7.006606905s for pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:10.898014   77394 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:10.903064   77394 pod_ready.go:92] pod "etcd-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:10.903088   77394 pod_ready.go:81] duration metric: took 5.066249ms for pod "etcd-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:10.903099   77394 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:11.409319   77394 pod_ready.go:92] pod "kube-apiserver-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:11.409344   77394 pod_ready.go:81] duration metric: took 506.238678ms for pod "kube-apiserver-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:11.409353   77394 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:10.250001   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:12.741099   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:10.542146   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:13.042422   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:12.682526   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:13.183014   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:13.682449   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:14.182138   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:14.683065   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:15.182838   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:15.682680   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:16.182714   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:16.682116   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:17.182842   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:13.415469   77394 pod_ready.go:102] pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:13.917111   77394 pod_ready.go:92] pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:13.917134   77394 pod_ready.go:81] duration metric: took 2.507774546s for pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.917149   77394 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-w5z2f" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.922045   77394 pod_ready.go:92] pod "kube-proxy-w5z2f" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:13.922069   77394 pod_ready.go:81] duration metric: took 4.912892ms for pod "kube-proxy-w5z2f" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.922080   77394 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.927633   77394 pod_ready.go:92] pod "kube-scheduler-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:13.927654   77394 pod_ready.go:81] duration metric: took 5.565409ms for pod "kube-scheduler-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.927666   77394 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:15.934081   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:15.240105   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:17.740031   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:19.740077   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:15.042540   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:17.043335   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:19.542061   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:17.683114   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:18.182919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:18.683103   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:19.182074   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:19.683031   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:20.182701   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:20.682749   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:21.182949   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:21.683001   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:22.182167   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:17.935797   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:20.434416   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:21.740735   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:24.238828   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:21.544060   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:24.042058   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:22.682723   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:23.182510   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:23.683084   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:24.182220   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:24.682699   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:25.182288   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:25.682433   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:26.182919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:26.682851   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:27.182225   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:22.435465   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:24.935088   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:26.239694   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:28.240174   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:26.542381   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:29.043706   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:27.682408   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:28.182187   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:28.683034   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:29.182922   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:29.682990   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:29.683063   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:29.730368   78080 cri.go:89] found id: ""
	I0729 18:28:29.730405   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.730413   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:29.730419   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:29.730473   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:29.770368   78080 cri.go:89] found id: ""
	I0729 18:28:29.770398   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.770409   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:29.770426   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:29.770479   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:29.809873   78080 cri.go:89] found id: ""
	I0729 18:28:29.809898   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.809906   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:29.809911   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:29.809970   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:29.848980   78080 cri.go:89] found id: ""
	I0729 18:28:29.849006   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.849016   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:29.849023   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:29.849082   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:29.887261   78080 cri.go:89] found id: ""
	I0729 18:28:29.887292   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.887302   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:29.887311   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:29.887388   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:29.927011   78080 cri.go:89] found id: ""
	I0729 18:28:29.927041   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.927051   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:29.927058   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:29.927122   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:29.965577   78080 cri.go:89] found id: ""
	I0729 18:28:29.965609   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.965619   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:29.965625   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:29.965693   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:29.999180   78080 cri.go:89] found id: ""
	I0729 18:28:29.999210   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.999222   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:29.999233   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:29.999253   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:30.049401   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:30.049433   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:30.063903   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:30.063939   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:30.194776   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:30.194797   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:30.194812   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:30.261861   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:30.261906   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:27.434837   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:29.435257   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:31.435297   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:30.738940   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:32.740748   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:31.542494   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:33.542872   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:32.801821   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:32.814741   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:32.814815   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:32.853490   78080 cri.go:89] found id: ""
	I0729 18:28:32.853514   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.853522   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:32.853530   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:32.853580   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:32.890314   78080 cri.go:89] found id: ""
	I0729 18:28:32.890339   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.890349   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:32.890356   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:32.890435   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:32.928231   78080 cri.go:89] found id: ""
	I0729 18:28:32.928255   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.928262   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:32.928268   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:32.928314   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:32.964024   78080 cri.go:89] found id: ""
	I0729 18:28:32.964054   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.964065   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:32.964072   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:32.964136   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:33.002099   78080 cri.go:89] found id: ""
	I0729 18:28:33.002127   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.002140   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:33.002146   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:33.002195   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:33.042238   78080 cri.go:89] found id: ""
	I0729 18:28:33.042265   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.042273   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:33.042278   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:33.042331   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:33.078715   78080 cri.go:89] found id: ""
	I0729 18:28:33.078741   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.078750   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:33.078756   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:33.078816   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:33.123304   78080 cri.go:89] found id: ""
	I0729 18:28:33.123334   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.123342   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:33.123351   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:33.123366   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:33.198950   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:33.198994   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:33.223566   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:33.223594   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:33.306500   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:33.306526   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:33.306541   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:33.379386   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:33.379421   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:35.926834   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:35.942218   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:35.942296   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:35.980115   78080 cri.go:89] found id: ""
	I0729 18:28:35.980142   78080 logs.go:276] 0 containers: []
	W0729 18:28:35.980153   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:35.980159   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:35.980221   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:36.015354   78080 cri.go:89] found id: ""
	I0729 18:28:36.015379   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.015387   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:36.015392   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:36.015456   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:36.056411   78080 cri.go:89] found id: ""
	I0729 18:28:36.056435   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.056445   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:36.056451   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:36.056499   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:36.099153   78080 cri.go:89] found id: ""
	I0729 18:28:36.099180   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.099188   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:36.099193   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:36.099241   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:36.133427   78080 cri.go:89] found id: ""
	I0729 18:28:36.133459   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.133470   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:36.133477   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:36.133544   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:36.168619   78080 cri.go:89] found id: ""
	I0729 18:28:36.168646   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.168657   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:36.168664   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:36.168723   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:36.203636   78080 cri.go:89] found id: ""
	I0729 18:28:36.203666   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.203676   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:36.203684   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:36.203747   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:36.246495   78080 cri.go:89] found id: ""
	I0729 18:28:36.246523   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.246533   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:36.246544   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:36.246561   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:36.260630   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:36.260656   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:36.337406   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:36.337424   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:36.337435   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:36.410016   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:36.410049   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:36.453458   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:36.453492   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:33.435859   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:35.934955   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:35.240070   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:37.739406   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:39.740035   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:35.543153   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:37.543467   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:39.543573   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:39.004147   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:39.018217   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:39.018279   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:39.054130   78080 cri.go:89] found id: ""
	I0729 18:28:39.054155   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.054166   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:39.054172   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:39.054219   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:39.090458   78080 cri.go:89] found id: ""
	I0729 18:28:39.090482   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.090490   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:39.090501   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:39.090548   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:39.126933   78080 cri.go:89] found id: ""
	I0729 18:28:39.126960   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.126971   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:39.126978   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:39.127042   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:39.162324   78080 cri.go:89] found id: ""
	I0729 18:28:39.162352   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.162381   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:39.162389   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:39.162450   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:39.202440   78080 cri.go:89] found id: ""
	I0729 18:28:39.202464   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.202471   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:39.202477   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:39.202537   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:39.238314   78080 cri.go:89] found id: ""
	I0729 18:28:39.238342   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.238352   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:39.238368   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:39.238436   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:39.275545   78080 cri.go:89] found id: ""
	I0729 18:28:39.275584   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.275592   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:39.275598   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:39.275663   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:39.311575   78080 cri.go:89] found id: ""
	I0729 18:28:39.311603   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.311614   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:39.311624   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:39.311643   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:39.367667   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:39.367711   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:39.381823   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:39.381852   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:39.456060   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:39.456083   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:39.456100   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:39.531747   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:39.531784   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:42.077771   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:42.092424   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:42.092512   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:42.128710   78080 cri.go:89] found id: ""
	I0729 18:28:42.128744   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.128756   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:42.128765   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:42.128834   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:42.166092   78080 cri.go:89] found id: ""
	I0729 18:28:42.166126   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.166133   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:42.166138   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:42.166186   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:42.200955   78080 cri.go:89] found id: ""
	I0729 18:28:42.200981   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.200989   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:42.200994   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:42.201053   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:38.435476   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:40.935166   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:42.240354   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:44.739322   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:41.543640   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:43.543781   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:42.240176   78080 cri.go:89] found id: ""
	I0729 18:28:42.240203   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.240212   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:42.240219   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:42.240279   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:42.279844   78080 cri.go:89] found id: ""
	I0729 18:28:42.279872   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.279880   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:42.279885   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:42.279946   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:42.313071   78080 cri.go:89] found id: ""
	I0729 18:28:42.313099   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.313108   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:42.313114   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:42.313187   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:42.348540   78080 cri.go:89] found id: ""
	I0729 18:28:42.348566   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.348573   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:42.348580   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:42.348630   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:42.384688   78080 cri.go:89] found id: ""
	I0729 18:28:42.384714   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.384725   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:42.384736   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:42.384750   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:42.399178   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:42.399206   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:42.472903   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:42.472921   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:42.472937   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:42.558541   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:42.558573   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:42.599403   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:42.599432   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:45.154026   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:45.167130   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:45.167200   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:45.203627   78080 cri.go:89] found id: ""
	I0729 18:28:45.203654   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.203663   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:45.203668   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:45.203714   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:45.242293   78080 cri.go:89] found id: ""
	I0729 18:28:45.242316   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.242325   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:45.242332   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:45.242403   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:45.282253   78080 cri.go:89] found id: ""
	I0729 18:28:45.282275   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.282282   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:45.282288   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:45.282335   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:45.320151   78080 cri.go:89] found id: ""
	I0729 18:28:45.320175   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.320183   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:45.320189   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:45.320250   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:45.356210   78080 cri.go:89] found id: ""
	I0729 18:28:45.356236   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.356247   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:45.356254   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:45.356316   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:45.393083   78080 cri.go:89] found id: ""
	I0729 18:28:45.393116   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.393131   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:45.393139   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:45.393199   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:45.430235   78080 cri.go:89] found id: ""
	I0729 18:28:45.430263   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.430274   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:45.430282   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:45.430346   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:45.463068   78080 cri.go:89] found id: ""
	I0729 18:28:45.463132   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.463143   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:45.463155   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:45.463203   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:45.541411   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:45.541441   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:45.581967   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:45.582001   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:45.639427   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:45.639459   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:45.655715   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:45.655741   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:45.725820   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:42.943815   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:45.435444   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:46.739873   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:49.240293   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:46.042576   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:48.042735   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:48.226252   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:48.240419   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:48.240494   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:48.271506   78080 cri.go:89] found id: ""
	I0729 18:28:48.271538   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.271550   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:48.271557   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:48.271615   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:48.305163   78080 cri.go:89] found id: ""
	I0729 18:28:48.305186   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.305198   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:48.305203   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:48.305252   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:48.336453   78080 cri.go:89] found id: ""
	I0729 18:28:48.336480   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.336492   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:48.336500   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:48.336557   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:48.368690   78080 cri.go:89] found id: ""
	I0729 18:28:48.368713   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.368720   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:48.368725   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:48.368784   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:48.401723   78080 cri.go:89] found id: ""
	I0729 18:28:48.401746   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.401753   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:48.401758   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:48.401822   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:48.439876   78080 cri.go:89] found id: ""
	I0729 18:28:48.439896   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.439903   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:48.439908   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:48.439956   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:48.473352   78080 cri.go:89] found id: ""
	I0729 18:28:48.473383   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.473394   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:48.473401   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:48.473461   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:48.506752   78080 cri.go:89] found id: ""
	I0729 18:28:48.506779   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.506788   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:48.506799   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:48.506815   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:48.547513   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:48.547535   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:48.599704   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:48.599733   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:48.613577   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:48.613604   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:48.681272   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:48.681290   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:48.681301   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:51.267397   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:51.280243   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:51.280317   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:51.314047   78080 cri.go:89] found id: ""
	I0729 18:28:51.314078   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.314090   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:51.314097   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:51.314162   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:51.346048   78080 cri.go:89] found id: ""
	I0729 18:28:51.346073   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.346080   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:51.346085   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:51.346144   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:51.380511   78080 cri.go:89] found id: ""
	I0729 18:28:51.380543   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.380553   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:51.380561   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:51.380637   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:51.415189   78080 cri.go:89] found id: ""
	I0729 18:28:51.415213   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.415220   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:51.415227   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:51.415310   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:51.454324   78080 cri.go:89] found id: ""
	I0729 18:28:51.454351   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.454380   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:51.454388   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:51.454449   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:51.488737   78080 cri.go:89] found id: ""
	I0729 18:28:51.488768   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.488779   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:51.488787   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:51.488848   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:51.528869   78080 cri.go:89] found id: ""
	I0729 18:28:51.528903   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.528912   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:51.528920   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:51.528972   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:51.566039   78080 cri.go:89] found id: ""
	I0729 18:28:51.566067   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.566075   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:51.566086   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:51.566102   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:51.604746   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:51.604774   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:51.661048   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:51.661089   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:51.675420   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:51.675447   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:51.754496   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:51.754531   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:51.754548   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:47.934575   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:49.935187   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:51.247773   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:53.740386   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:50.043378   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:52.543104   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:54.335796   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:54.350726   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:54.350784   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:54.389661   78080 cri.go:89] found id: ""
	I0729 18:28:54.389683   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.389694   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:54.389701   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:54.389761   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:54.427073   78080 cri.go:89] found id: ""
	I0729 18:28:54.427100   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.427110   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:54.427117   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:54.427178   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:54.466761   78080 cri.go:89] found id: ""
	I0729 18:28:54.466793   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.466802   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:54.466808   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:54.466871   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:54.501115   78080 cri.go:89] found id: ""
	I0729 18:28:54.501144   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.501159   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:54.501167   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:54.501229   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:54.535430   78080 cri.go:89] found id: ""
	I0729 18:28:54.535461   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.535472   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:54.535480   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:54.535543   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:54.574994   78080 cri.go:89] found id: ""
	I0729 18:28:54.575024   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.575034   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:54.575041   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:54.575107   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:54.608770   78080 cri.go:89] found id: ""
	I0729 18:28:54.608792   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.608800   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:54.608805   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:54.608850   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:54.648026   78080 cri.go:89] found id: ""
	I0729 18:28:54.648050   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.648057   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:54.648066   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:54.648077   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:54.728445   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:54.728485   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:54.774752   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:54.774781   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:54.826549   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:54.826582   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:54.840366   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:54.840394   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:54.907422   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
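The cycle above repeats through the rest of this excerpt: every crictl query for a control-plane container (kube-apiserver, etcd, coredns, scheduler, proxy, controller-manager) returns an empty list, and each "describe nodes" attempt fails with "connection refused" on localhost:8443. That pattern is consistent with the v1.20.0 (old-k8s-version) control plane never coming up on this node. A minimal manual check along the same lines (a sketch only, assuming shell access to the minikube VM; these commands are not part of the test harness) would be:

	# no apiserver container should appear if the control plane never started
	sudo crictl ps -a | grep kube-apiserver
	# the endpoint the test keeps probing; expect "connection refused" here as well
	curl -sk https://localhost:8443/healthz
	# kubelet logs usually show why the static control-plane pods were not created
	sudo journalctl -u kubelet -n 50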
	I0729 18:28:52.434956   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:54.436125   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:56.933929   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:56.239045   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:58.239967   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:55.041898   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:57.042968   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:59.542837   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:57.408469   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:57.421855   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:57.421923   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:57.457794   78080 cri.go:89] found id: ""
	I0729 18:28:57.457816   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.457824   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:57.457829   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:57.457908   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:57.492851   78080 cri.go:89] found id: ""
	I0729 18:28:57.492880   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.492888   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:57.492894   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:57.492946   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:57.528221   78080 cri.go:89] found id: ""
	I0729 18:28:57.528249   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.528258   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:57.528265   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:57.528330   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:57.565504   78080 cri.go:89] found id: ""
	I0729 18:28:57.565536   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.565547   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:57.565554   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:57.565618   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:57.599391   78080 cri.go:89] found id: ""
	I0729 18:28:57.599418   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.599426   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:57.599432   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:57.599491   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:57.643757   78080 cri.go:89] found id: ""
	I0729 18:28:57.643784   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.643798   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:57.643806   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:57.643867   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:57.680825   78080 cri.go:89] found id: ""
	I0729 18:28:57.680853   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.680864   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:57.680871   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:57.680936   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:57.714450   78080 cri.go:89] found id: ""
	I0729 18:28:57.714479   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.714490   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:57.714500   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:57.714516   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:57.798411   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:57.798437   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:57.798453   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:57.878210   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:57.878246   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:57.917476   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:57.917505   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:57.971395   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:57.971432   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:00.486419   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:00.500625   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:00.500703   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:00.539625   78080 cri.go:89] found id: ""
	I0729 18:29:00.539650   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.539659   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:00.539682   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:00.539737   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:00.577252   78080 cri.go:89] found id: ""
	I0729 18:29:00.577284   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.577297   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:00.577303   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:00.577350   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:00.611850   78080 cri.go:89] found id: ""
	I0729 18:29:00.611878   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.611886   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:00.611892   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:00.611939   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:00.648964   78080 cri.go:89] found id: ""
	I0729 18:29:00.648989   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.648996   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:00.649003   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:00.649062   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:00.686124   78080 cri.go:89] found id: ""
	I0729 18:29:00.686147   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.686156   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:00.686161   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:00.686217   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:00.721166   78080 cri.go:89] found id: ""
	I0729 18:29:00.721195   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.721205   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:00.721213   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:00.721276   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:00.758394   78080 cri.go:89] found id: ""
	I0729 18:29:00.758423   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.758431   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:00.758436   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:00.758491   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:00.793487   78080 cri.go:89] found id: ""
	I0729 18:29:00.793514   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.793523   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:00.793533   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:00.793549   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:00.807069   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:00.807106   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:00.880611   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:00.880629   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:00.880641   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:00.963534   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:00.963568   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:01.004145   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:01.004174   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:58.933964   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:00.934221   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:00.739676   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:02.741020   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:02.042346   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:04.541902   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:03.560985   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:03.574407   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:03.574476   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:03.608027   78080 cri.go:89] found id: ""
	I0729 18:29:03.608048   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.608057   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:03.608062   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:03.608119   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:03.644777   78080 cri.go:89] found id: ""
	I0729 18:29:03.644804   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.644814   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:03.644821   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:03.644895   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:03.684050   78080 cri.go:89] found id: ""
	I0729 18:29:03.684074   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.684082   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:03.684089   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:03.684149   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:03.724350   78080 cri.go:89] found id: ""
	I0729 18:29:03.724376   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.724383   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:03.724390   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:03.724439   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:03.766859   78080 cri.go:89] found id: ""
	I0729 18:29:03.766887   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.766898   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:03.766905   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:03.766967   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:03.800535   78080 cri.go:89] found id: ""
	I0729 18:29:03.800562   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.800572   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:03.800579   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:03.800639   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:03.834991   78080 cri.go:89] found id: ""
	I0729 18:29:03.835011   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.835019   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:03.835024   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:03.835073   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:03.869159   78080 cri.go:89] found id: ""
	I0729 18:29:03.869191   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.869201   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:03.869211   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:03.869226   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:03.940451   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:03.940469   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:03.940487   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:04.020880   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:04.020910   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:04.064707   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:04.064728   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:04.121551   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:04.121587   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:06.636983   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:06.651500   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:06.651582   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:06.686556   78080 cri.go:89] found id: ""
	I0729 18:29:06.686582   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.686592   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:06.686599   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:06.686660   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:06.721967   78080 cri.go:89] found id: ""
	I0729 18:29:06.721996   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.722008   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:06.722016   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:06.722115   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:06.760409   78080 cri.go:89] found id: ""
	I0729 18:29:06.760433   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.760440   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:06.760445   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:06.760499   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:06.794050   78080 cri.go:89] found id: ""
	I0729 18:29:06.794074   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.794081   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:06.794087   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:06.794143   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:06.826445   78080 cri.go:89] found id: ""
	I0729 18:29:06.826471   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.826478   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:06.826484   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:06.826544   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:06.860680   78080 cri.go:89] found id: ""
	I0729 18:29:06.860700   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.860706   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:06.860712   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:06.860761   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:06.898192   78080 cri.go:89] found id: ""
	I0729 18:29:06.898215   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.898223   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:06.898229   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:06.898284   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:06.931892   78080 cri.go:89] found id: ""
	I0729 18:29:06.931920   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.931930   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:06.931940   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:06.931955   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:06.987265   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:06.987294   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:07.043520   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:07.043547   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:07.056995   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:07.057019   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:07.124932   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:07.124956   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:07.124971   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:03.435778   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:05.936004   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:05.239352   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:07.239383   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:06.542526   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:08.543497   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:09.708947   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:09.723497   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:09.723565   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:09.762686   78080 cri.go:89] found id: ""
	I0729 18:29:09.762714   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.762725   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:09.762733   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:09.762797   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:09.799674   78080 cri.go:89] found id: ""
	I0729 18:29:09.799699   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.799708   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:09.799715   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:09.799775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:09.836121   78080 cri.go:89] found id: ""
	I0729 18:29:09.836147   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.836156   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:09.836161   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:09.836209   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:09.872758   78080 cri.go:89] found id: ""
	I0729 18:29:09.872783   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.872791   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:09.872797   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:09.872842   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:09.911681   78080 cri.go:89] found id: ""
	I0729 18:29:09.911711   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.911719   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:09.911724   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:09.911773   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:09.951531   78080 cri.go:89] found id: ""
	I0729 18:29:09.951554   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.951561   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:09.951567   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:09.951624   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:09.985568   78080 cri.go:89] found id: ""
	I0729 18:29:09.985597   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.985606   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:09.985612   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:09.985661   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:10.020369   78080 cri.go:89] found id: ""
	I0729 18:29:10.020394   78080 logs.go:276] 0 containers: []
	W0729 18:29:10.020402   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:10.020409   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:10.020421   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:10.076538   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:10.076574   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:10.090954   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:10.090980   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:10.165843   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:10.165875   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:10.165890   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:10.242438   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:10.242469   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:08.434575   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:10.934523   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:09.744446   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:12.239540   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:14.242060   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:10.544272   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:13.043064   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:12.781369   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:12.797066   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:12.797160   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:12.832500   78080 cri.go:89] found id: ""
	I0729 18:29:12.832528   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.832545   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:12.832552   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:12.832615   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:12.866390   78080 cri.go:89] found id: ""
	I0729 18:29:12.866420   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.866428   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:12.866434   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:12.866494   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:12.901616   78080 cri.go:89] found id: ""
	I0729 18:29:12.901636   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.901644   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:12.901649   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:12.901713   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:12.935954   78080 cri.go:89] found id: ""
	I0729 18:29:12.935976   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.935985   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:12.935993   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:12.936053   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:12.970570   78080 cri.go:89] found id: ""
	I0729 18:29:12.970623   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.970637   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:12.970645   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:12.970702   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:13.008629   78080 cri.go:89] found id: ""
	I0729 18:29:13.008658   78080 logs.go:276] 0 containers: []
	W0729 18:29:13.008666   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:13.008672   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:13.008725   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:13.045689   78080 cri.go:89] found id: ""
	I0729 18:29:13.045713   78080 logs.go:276] 0 containers: []
	W0729 18:29:13.045721   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:13.045726   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:13.045773   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:13.084707   78080 cri.go:89] found id: ""
	I0729 18:29:13.084735   78080 logs.go:276] 0 containers: []
	W0729 18:29:13.084745   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:13.084756   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:13.084774   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:13.161884   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:13.161920   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:13.205377   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:13.205410   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:13.258161   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:13.258189   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:13.272208   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:13.272240   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:13.347519   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:15.848068   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:15.861773   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:15.861851   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:15.902421   78080 cri.go:89] found id: ""
	I0729 18:29:15.902449   78080 logs.go:276] 0 containers: []
	W0729 18:29:15.902458   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:15.902466   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:15.902532   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:15.939552   78080 cri.go:89] found id: ""
	I0729 18:29:15.939576   78080 logs.go:276] 0 containers: []
	W0729 18:29:15.939583   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:15.939588   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:15.939645   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:15.974424   78080 cri.go:89] found id: ""
	I0729 18:29:15.974454   78080 logs.go:276] 0 containers: []
	W0729 18:29:15.974463   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:15.974468   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:15.974516   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:16.010955   78080 cri.go:89] found id: ""
	I0729 18:29:16.010993   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.011000   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:16.011006   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:16.011062   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:16.046785   78080 cri.go:89] found id: ""
	I0729 18:29:16.046815   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.046825   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:16.046832   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:16.046887   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:16.082691   78080 cri.go:89] found id: ""
	I0729 18:29:16.082721   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.082731   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:16.082739   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:16.082796   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:16.127633   78080 cri.go:89] found id: ""
	I0729 18:29:16.127663   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.127676   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:16.127684   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:16.127741   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:16.162641   78080 cri.go:89] found id: ""
	I0729 18:29:16.162662   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.162670   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:16.162684   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:16.162695   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:16.215132   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:16.215162   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:16.229581   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:16.229607   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:16.303178   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:16.303198   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:16.303212   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:16.383739   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:16.383775   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:12.934751   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:14.934965   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:16.739047   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:18.739145   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:15.043163   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:17.544340   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:18.924292   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:18.937571   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:18.937626   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:18.970523   78080 cri.go:89] found id: ""
	I0729 18:29:18.970554   78080 logs.go:276] 0 containers: []
	W0729 18:29:18.970563   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:18.970568   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:18.970624   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:19.005448   78080 cri.go:89] found id: ""
	I0729 18:29:19.005471   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.005478   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:19.005483   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:19.005538   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:19.044352   78080 cri.go:89] found id: ""
	I0729 18:29:19.044377   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.044386   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:19.044393   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:19.044448   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:19.079288   78080 cri.go:89] found id: ""
	I0729 18:29:19.079317   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.079327   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:19.079333   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:19.079402   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:19.122932   78080 cri.go:89] found id: ""
	I0729 18:29:19.122954   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.122961   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:19.122967   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:19.123020   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:19.166992   78080 cri.go:89] found id: ""
	I0729 18:29:19.167018   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.167025   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:19.167031   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:19.167103   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:19.215301   78080 cri.go:89] found id: ""
	I0729 18:29:19.215331   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.215341   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:19.215355   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:19.215419   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:19.267635   78080 cri.go:89] found id: ""
	I0729 18:29:19.267657   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.267664   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:19.267671   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:19.267682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:19.319924   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:19.319962   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:19.333987   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:19.334010   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:19.406541   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:19.406558   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:19.406571   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:19.487388   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:19.487426   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:22.027745   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:22.041145   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:22.041218   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:22.080000   78080 cri.go:89] found id: ""
	I0729 18:29:22.080022   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.080029   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:22.080034   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:22.080079   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:22.116385   78080 cri.go:89] found id: ""
	I0729 18:29:22.116415   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.116425   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:22.116431   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:22.116492   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:22.150530   78080 cri.go:89] found id: ""
	I0729 18:29:22.150552   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.150559   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:22.150565   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:22.150621   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:22.188782   78080 cri.go:89] found id: ""
	I0729 18:29:22.188808   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.188817   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:22.188822   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:22.188873   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:17.434007   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:19.434864   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:21.935573   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:20.739852   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:23.239853   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:20.044010   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:22.542952   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:24.543614   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:22.227117   78080 cri.go:89] found id: ""
	I0729 18:29:22.227152   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.227162   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:22.227169   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:22.227234   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:22.263057   78080 cri.go:89] found id: ""
	I0729 18:29:22.263079   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.263086   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:22.263091   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:22.263145   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:22.297368   78080 cri.go:89] found id: ""
	I0729 18:29:22.297391   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.297399   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:22.297406   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:22.297466   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:22.334117   78080 cri.go:89] found id: ""
	I0729 18:29:22.334149   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.334159   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:22.334170   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:22.334184   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:22.349344   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:22.349369   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:22.415720   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:22.415743   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:22.415758   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:22.494937   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:22.494971   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:22.536352   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:22.536382   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:25.087795   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:25.103985   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:25.104050   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:25.158532   78080 cri.go:89] found id: ""
	I0729 18:29:25.158562   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.158572   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:25.158580   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:25.158641   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:25.216740   78080 cri.go:89] found id: ""
	I0729 18:29:25.216762   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.216769   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:25.216775   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:25.216827   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:25.254827   78080 cri.go:89] found id: ""
	I0729 18:29:25.254855   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.254865   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:25.254872   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:25.254934   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:25.289377   78080 cri.go:89] found id: ""
	I0729 18:29:25.289407   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.289417   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:25.289424   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:25.289484   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:25.328111   78080 cri.go:89] found id: ""
	I0729 18:29:25.328144   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.328153   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:25.328161   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:25.328224   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:25.364779   78080 cri.go:89] found id: ""
	I0729 18:29:25.364808   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.364815   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:25.364827   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:25.364874   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:25.402906   78080 cri.go:89] found id: ""
	I0729 18:29:25.402935   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.402942   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:25.402948   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:25.403007   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:25.438747   78080 cri.go:89] found id: ""
	I0729 18:29:25.438770   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.438778   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:25.438787   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:25.438803   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:25.452803   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:25.452829   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:25.527575   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:25.527593   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:25.527610   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:25.622437   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:25.622482   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:25.661451   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:25.661478   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:23.936249   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:26.434496   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:25.739358   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:27.739702   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:27.043125   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:29.542130   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:28.213898   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:28.230013   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:28.230071   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:28.265484   78080 cri.go:89] found id: ""
	I0729 18:29:28.265511   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.265521   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:28.265530   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:28.265594   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:28.306374   78080 cri.go:89] found id: ""
	I0729 18:29:28.306428   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.306441   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:28.306448   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:28.306501   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:28.340274   78080 cri.go:89] found id: ""
	I0729 18:29:28.340299   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.340309   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:28.340316   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:28.340379   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:28.373928   78080 cri.go:89] found id: ""
	I0729 18:29:28.373973   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.373982   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:28.373990   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:28.374052   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:28.407075   78080 cri.go:89] found id: ""
	I0729 18:29:28.407107   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.407120   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:28.407129   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:28.407215   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:28.444501   78080 cri.go:89] found id: ""
	I0729 18:29:28.444528   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.444536   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:28.444543   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:28.444614   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:28.487513   78080 cri.go:89] found id: ""
	I0729 18:29:28.487540   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.487548   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:28.487554   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:28.487611   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:28.521957   78080 cri.go:89] found id: ""
	I0729 18:29:28.521990   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.522000   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:28.522011   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:28.522027   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:28.536880   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:28.536918   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:28.609486   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:28.609513   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:28.609528   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:28.694086   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:28.694125   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:28.733930   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:28.733964   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:31.292260   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:31.305840   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:31.305899   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:31.342510   78080 cri.go:89] found id: ""
	I0729 18:29:31.342539   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.342550   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:31.342557   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:31.342613   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:31.375093   78080 cri.go:89] found id: ""
	I0729 18:29:31.375118   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.375128   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:31.375135   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:31.375198   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:31.408554   78080 cri.go:89] found id: ""
	I0729 18:29:31.408576   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.408583   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:31.408588   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:31.408660   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:31.448748   78080 cri.go:89] found id: ""
	I0729 18:29:31.448774   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.448783   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:31.448796   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:31.448855   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:31.483541   78080 cri.go:89] found id: ""
	I0729 18:29:31.483564   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.483572   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:31.483578   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:31.483637   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:31.518173   78080 cri.go:89] found id: ""
	I0729 18:29:31.518198   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.518209   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:31.518217   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:31.518279   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:31.553345   78080 cri.go:89] found id: ""
	I0729 18:29:31.553371   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.553379   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:31.553384   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:31.553439   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:31.591857   78080 cri.go:89] found id: ""
	I0729 18:29:31.591887   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.591905   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:31.591916   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:31.591929   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:31.648404   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:31.648436   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:31.661455   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:31.661477   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:31.732978   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:31.732997   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:31.733009   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:31.812105   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:31.812145   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:28.435517   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:30.436822   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:30.239755   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:32.739231   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:34.739534   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:31.542847   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:33.543096   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:34.353079   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:34.366759   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:34.366817   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:34.400944   78080 cri.go:89] found id: ""
	I0729 18:29:34.400974   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.400984   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:34.400991   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:34.401055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:34.439348   78080 cri.go:89] found id: ""
	I0729 18:29:34.439373   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.439383   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:34.439395   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:34.439444   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:34.473969   78080 cri.go:89] found id: ""
	I0729 18:29:34.473991   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.474010   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:34.474017   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:34.474080   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:34.507741   78080 cri.go:89] found id: ""
	I0729 18:29:34.507770   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.507778   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:34.507784   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:34.507845   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:34.543794   78080 cri.go:89] found id: ""
	I0729 18:29:34.543815   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.543823   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:34.543830   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:34.543895   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:34.577893   78080 cri.go:89] found id: ""
	I0729 18:29:34.577918   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.577926   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:34.577931   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:34.577978   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:34.612703   78080 cri.go:89] found id: ""
	I0729 18:29:34.612735   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.612745   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:34.612752   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:34.612815   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:34.648167   78080 cri.go:89] found id: ""
	I0729 18:29:34.648197   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.648209   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:34.648219   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:34.648233   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:34.689821   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:34.689848   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:34.743902   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:34.743935   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:34.757400   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:34.757426   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:34.833684   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:34.833706   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:34.833721   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:32.934207   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:34.936549   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:37.238618   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:39.239761   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:36.042461   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:38.543304   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:37.419270   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:37.433249   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:37.433301   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:37.469991   78080 cri.go:89] found id: ""
	I0729 18:29:37.470021   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.470031   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:37.470038   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:37.470098   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:37.504511   78080 cri.go:89] found id: ""
	I0729 18:29:37.504537   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.504548   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:37.504554   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:37.504612   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:37.545304   78080 cri.go:89] found id: ""
	I0729 18:29:37.545332   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.545342   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:37.545349   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:37.545406   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:37.584255   78080 cri.go:89] found id: ""
	I0729 18:29:37.584280   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.584287   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:37.584292   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:37.584345   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:37.620917   78080 cri.go:89] found id: ""
	I0729 18:29:37.620943   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.620951   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:37.620958   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:37.621022   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:37.659381   78080 cri.go:89] found id: ""
	I0729 18:29:37.659405   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.659414   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:37.659419   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:37.659486   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:37.701337   78080 cri.go:89] found id: ""
	I0729 18:29:37.701360   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.701368   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:37.701373   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:37.701426   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:37.737142   78080 cri.go:89] found id: ""
	I0729 18:29:37.737168   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.737177   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:37.737186   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:37.737201   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:37.789951   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:37.789992   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:37.804759   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:37.804784   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:37.881777   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:37.881794   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:37.881808   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:37.970593   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:37.970625   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:40.511557   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:40.525472   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:40.525527   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:40.564227   78080 cri.go:89] found id: ""
	I0729 18:29:40.564253   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.564263   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:40.564270   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:40.564336   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:40.600384   78080 cri.go:89] found id: ""
	I0729 18:29:40.600409   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.600417   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:40.600423   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:40.600475   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:40.634819   78080 cri.go:89] found id: ""
	I0729 18:29:40.634843   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.634858   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:40.634866   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:40.634913   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:40.669963   78080 cri.go:89] found id: ""
	I0729 18:29:40.669991   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.669999   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:40.670006   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:40.670069   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:40.705680   78080 cri.go:89] found id: ""
	I0729 18:29:40.705705   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.705714   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:40.705719   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:40.705775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:40.743691   78080 cri.go:89] found id: ""
	I0729 18:29:40.743715   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.743725   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:40.743732   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:40.743820   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:40.783858   78080 cri.go:89] found id: ""
	I0729 18:29:40.783889   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.783898   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:40.783903   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:40.783953   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:40.821499   78080 cri.go:89] found id: ""
	I0729 18:29:40.821527   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.821537   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:40.821547   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:40.821562   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:40.874941   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:40.874972   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:40.888034   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:40.888057   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:40.960013   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:40.960032   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:40.960044   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:41.043013   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:41.043042   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:37.435119   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:39.435967   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:41.934232   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:41.739070   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:43.739497   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:40.543453   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:43.042528   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:43.583555   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:43.597120   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:43.597193   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:43.631500   78080 cri.go:89] found id: ""
	I0729 18:29:43.631526   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.631535   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:43.631542   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:43.631607   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:43.667003   78080 cri.go:89] found id: ""
	I0729 18:29:43.667029   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.667037   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:43.667042   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:43.667102   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:43.701471   78080 cri.go:89] found id: ""
	I0729 18:29:43.701502   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.701510   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:43.701515   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:43.701569   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:43.740037   78080 cri.go:89] found id: ""
	I0729 18:29:43.740058   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.740067   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:43.740074   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:43.740145   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:43.772584   78080 cri.go:89] found id: ""
	I0729 18:29:43.772610   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.772620   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:43.772626   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:43.772689   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:43.806340   78080 cri.go:89] found id: ""
	I0729 18:29:43.806382   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.806393   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:43.806401   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:43.806480   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:43.840085   78080 cri.go:89] found id: ""
	I0729 18:29:43.840109   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.840118   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:43.840133   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:43.840198   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:43.873412   78080 cri.go:89] found id: ""
	I0729 18:29:43.873438   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.873448   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:43.873458   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:43.873473   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:43.928762   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:43.928790   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:43.944129   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:43.944156   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:44.017330   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:44.017349   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:44.017361   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:44.106858   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:44.106915   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:46.651050   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:46.665253   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:46.665310   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:46.698846   78080 cri.go:89] found id: ""
	I0729 18:29:46.698871   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.698881   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:46.698888   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:46.698956   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:46.734354   78080 cri.go:89] found id: ""
	I0729 18:29:46.734395   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.734405   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:46.734413   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:46.734468   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:46.771978   78080 cri.go:89] found id: ""
	I0729 18:29:46.771999   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.772007   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:46.772012   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:46.772059   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:46.807231   78080 cri.go:89] found id: ""
	I0729 18:29:46.807255   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.807263   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:46.807272   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:46.807329   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:46.842257   78080 cri.go:89] found id: ""
	I0729 18:29:46.842278   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.842306   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:46.842312   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:46.842373   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:46.876287   78080 cri.go:89] found id: ""
	I0729 18:29:46.876309   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.876317   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:46.876323   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:46.876389   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:46.909695   78080 cri.go:89] found id: ""
	I0729 18:29:46.909719   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.909726   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:46.909731   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:46.909806   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:46.951768   78080 cri.go:89] found id: ""
	I0729 18:29:46.951798   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.951807   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:46.951815   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:46.951825   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:47.025467   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:47.025485   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:47.025497   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:47.106336   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:47.106391   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:47.145652   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:47.145682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:47.200857   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:47.200886   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:43.935210   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:46.434346   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:45.739606   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:48.240282   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:45.544442   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:48.042872   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:49.715401   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:49.729703   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:49.729776   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:49.770016   78080 cri.go:89] found id: ""
	I0729 18:29:49.770039   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.770062   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:49.770070   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:49.770127   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:49.805464   78080 cri.go:89] found id: ""
	I0729 18:29:49.805487   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.805495   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:49.805500   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:49.805560   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:49.838739   78080 cri.go:89] found id: ""
	I0729 18:29:49.838770   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.838782   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:49.838789   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:49.838861   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:49.881168   78080 cri.go:89] found id: ""
	I0729 18:29:49.881194   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.881202   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:49.881208   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:49.881269   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:49.919978   78080 cri.go:89] found id: ""
	I0729 18:29:49.919999   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.920006   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:49.920012   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:49.920079   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:49.958971   78080 cri.go:89] found id: ""
	I0729 18:29:49.958996   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.959006   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:49.959013   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:49.959063   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:50.001253   78080 cri.go:89] found id: ""
	I0729 18:29:50.001281   78080 logs.go:276] 0 containers: []
	W0729 18:29:50.001291   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:50.001298   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:50.001362   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:50.038729   78080 cri.go:89] found id: ""
	I0729 18:29:50.038755   78080 logs.go:276] 0 containers: []
	W0729 18:29:50.038766   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:50.038776   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:50.038789   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:50.082540   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:50.082567   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:50.132372   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:50.132413   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:50.146806   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:50.146835   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:50.214495   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:50.214515   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:50.214532   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:48.435540   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:50.935475   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:50.240626   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:52.739158   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:50.044073   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:52.047924   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:54.542657   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:52.793987   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:52.808085   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:52.808149   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:52.844869   78080 cri.go:89] found id: ""
	I0729 18:29:52.844904   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.844917   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:52.844925   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:52.844986   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:52.878097   78080 cri.go:89] found id: ""
	I0729 18:29:52.878122   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.878135   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:52.878142   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:52.878191   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:52.910843   78080 cri.go:89] found id: ""
	I0729 18:29:52.910884   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.910894   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:52.910902   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:52.910953   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:52.943233   78080 cri.go:89] found id: ""
	I0729 18:29:52.943257   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.943267   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:52.943274   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:52.943335   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:52.978354   78080 cri.go:89] found id: ""
	I0729 18:29:52.978402   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.978413   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:52.978423   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:52.978503   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:53.011238   78080 cri.go:89] found id: ""
	I0729 18:29:53.011266   78080 logs.go:276] 0 containers: []
	W0729 18:29:53.011276   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:53.011283   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:53.011336   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:53.048787   78080 cri.go:89] found id: ""
	I0729 18:29:53.048817   78080 logs.go:276] 0 containers: []
	W0729 18:29:53.048827   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:53.048834   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:53.048900   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:53.086108   78080 cri.go:89] found id: ""
	I0729 18:29:53.086135   78080 logs.go:276] 0 containers: []
	W0729 18:29:53.086156   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:53.086176   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:53.086195   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:53.137552   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:53.137580   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:53.151308   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:53.151333   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:53.225968   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:53.225992   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:53.226004   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:53.308111   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:53.308145   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:55.850207   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:55.864003   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:55.864054   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:55.898109   78080 cri.go:89] found id: ""
	I0729 18:29:55.898134   78080 logs.go:276] 0 containers: []
	W0729 18:29:55.898142   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:55.898148   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:55.898201   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:55.931616   78080 cri.go:89] found id: ""
	I0729 18:29:55.931643   78080 logs.go:276] 0 containers: []
	W0729 18:29:55.931653   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:55.931660   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:55.931719   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:55.969034   78080 cri.go:89] found id: ""
	I0729 18:29:55.969063   78080 logs.go:276] 0 containers: []
	W0729 18:29:55.969073   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:55.969080   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:55.969142   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:56.007552   78080 cri.go:89] found id: ""
	I0729 18:29:56.007576   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.007586   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:56.007592   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:56.007653   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:56.044342   78080 cri.go:89] found id: ""
	I0729 18:29:56.044367   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.044376   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:56.044382   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:56.044437   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:56.078352   78080 cri.go:89] found id: ""
	I0729 18:29:56.078396   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.078412   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:56.078420   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:56.078471   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:56.116505   78080 cri.go:89] found id: ""
	I0729 18:29:56.116532   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.116543   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:56.116551   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:56.116611   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:56.151493   78080 cri.go:89] found id: ""
	I0729 18:29:56.151516   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.151523   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:56.151530   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:56.151542   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:56.206170   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:56.206198   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:56.219658   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:56.219684   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:56.290279   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:56.290300   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:56.290312   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:56.371352   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:56.371382   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
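Each retry in the cycle above checks for the expected control-plane components by shelling out to crictl with a name filter; an empty ID list is what produces the "No container was found matching ..." warnings. A minimal, hypothetical Go sketch of that kind of check (illustrative only, not minikube's cri.go) could look like:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs "sudo crictl ps -a --quiet --name=<component>" and
// returns the matching container IDs, mirroring the Run: lines in this log.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
		ids, err := listContainerIDs(c)
		switch {
		case err != nil:
			fmt.Printf("listing %q failed: %v\n", c, err)
		case len(ids) == 0:
			fmt.Printf("No container was found matching %q\n", c)
		default:
			fmt.Printf("found %d container(s) for %q\n", len(ids), c)
		}
	}
}

An empty result for every component, as seen throughout this log, means the CRI runtime has not created any control-plane containers at all.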
	I0729 18:29:53.434046   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:55.435343   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:55.239055   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:57.241032   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:59.740003   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:57.041745   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:59.042416   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
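The pod_ready.go:102 lines interleaved here come from other test runs polling the metrics-server pods until their Ready condition turns True. A generic client-go sketch of that condition check follows (the pod name is taken from the log; the kubeconfig path is a placeholder, and this is not the test harness's own helper):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, which is the
// status these log lines keep reporting as "False" for metrics-server.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-569cc877fc-flh27", metav1.GetOptions{})
		if err != nil {
			fmt.Println("get pod:", err)
		} else if isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		} else {
			fmt.Println(`pod has status "Ready":"False"`)
		}
		time.Sleep(2 * time.Second)
	}
}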
	I0729 18:29:58.908793   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:58.922566   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:58.922626   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:58.959375   78080 cri.go:89] found id: ""
	I0729 18:29:58.959397   78080 logs.go:276] 0 containers: []
	W0729 18:29:58.959404   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:58.959410   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:58.959459   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:58.993235   78080 cri.go:89] found id: ""
	I0729 18:29:58.993257   78080 logs.go:276] 0 containers: []
	W0729 18:29:58.993265   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:58.993271   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:58.993331   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:59.028186   78080 cri.go:89] found id: ""
	I0729 18:29:59.028212   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.028220   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:59.028225   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:59.028271   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:59.063589   78080 cri.go:89] found id: ""
	I0729 18:29:59.063619   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.063628   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:59.063635   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:59.063695   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:59.101116   78080 cri.go:89] found id: ""
	I0729 18:29:59.101142   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.101152   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:59.101158   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:59.101208   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:59.135288   78080 cri.go:89] found id: ""
	I0729 18:29:59.135314   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.135324   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:59.135332   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:59.135395   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:59.170520   78080 cri.go:89] found id: ""
	I0729 18:29:59.170549   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.170557   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:59.170562   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:59.170618   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:59.229796   78080 cri.go:89] found id: ""
	I0729 18:29:59.229825   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.229835   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:59.229843   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:59.229871   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:59.244654   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:59.244682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:59.321262   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:59.321286   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:59.321301   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:59.401423   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:59.401459   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:59.442916   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:59.442938   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
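When no containers are found, the harness falls back to gathering diagnostics from the guest. The commands it runs (as recorded in the Run: lines above) can be summarized as follows; this is an illustrative Go summary, not a copy of minikube's logs.go:

package main

import "fmt"

// gatherCommands lists the per-source shell commands seen in the
// "Gathering logs for ..." steps of this log.
var gatherCommands = map[string]string{
	"kubelet":          "sudo journalctl -u kubelet -n 400",
	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	"describe nodes":   "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
	"CRI-O":            "sudo journalctl -u crio -n 400",
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

func main() {
	for source, cmd := range gatherCommands {
		fmt.Printf("%-17s %s\n", source, cmd)
	}
}

Of these, only "describe nodes" fails outright, because it needs the apiserver; the journalctl, dmesg, and crictl commands succeed regardless of cluster state.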
	I0729 18:30:01.995116   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:02.008454   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:02.008516   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:02.046412   78080 cri.go:89] found id: ""
	I0729 18:30:02.046431   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.046438   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:02.046443   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:02.046487   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:02.082444   78080 cri.go:89] found id: ""
	I0729 18:30:02.082466   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.082476   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:02.082482   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:02.082551   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:02.116013   78080 cri.go:89] found id: ""
	I0729 18:30:02.116041   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.116052   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:02.116058   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:02.116127   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:02.155817   78080 cri.go:89] found id: ""
	I0729 18:30:02.155844   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.155854   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:02.155862   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:02.155914   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:02.195518   78080 cri.go:89] found id: ""
	I0729 18:30:02.195548   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.195556   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:02.195563   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:02.195624   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:57.934058   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:59.934547   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:01.935238   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:01.742050   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:04.239758   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:01.043550   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:03.542544   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:02.228248   78080 cri.go:89] found id: ""
	I0729 18:30:02.228274   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.228283   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:02.228289   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:02.228370   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:02.262441   78080 cri.go:89] found id: ""
	I0729 18:30:02.262469   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.262479   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:02.262486   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:02.262546   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:02.296900   78080 cri.go:89] found id: ""
	I0729 18:30:02.296930   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.296937   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:02.296953   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:02.296965   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:02.352356   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:02.352389   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:02.366336   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:02.366365   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:02.441367   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:02.441389   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:02.441403   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:02.524134   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:02.524173   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
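Before each listing pass the harness also runs sudo pgrep -xnf kube-apiserver.*minikube.* (as on the next line) to see whether an apiserver process exists at all, independently of the CRI view. A hypothetical Go wrapper for that probe, shown only to illustrate how pgrep's exit status maps to "process found / not found":

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// pgrep prints a PID and exits 0 when a match exists, and exits 1 when
	// nothing matches; -f matches against the full command line, -x requires
	// the pattern to match that whole line, and -n picks the newest match.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("no kube-apiserver process found:", err)
		return
	}
	fmt.Println("kube-apiserver PID:", strings.TrimSpace(string(out)))
}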
	I0729 18:30:05.071581   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:05.085481   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:05.085535   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:05.121610   78080 cri.go:89] found id: ""
	I0729 18:30:05.121636   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.121644   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:05.121652   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:05.121716   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:05.157382   78080 cri.go:89] found id: ""
	I0729 18:30:05.157406   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.157413   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:05.157418   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:05.157478   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:05.195552   78080 cri.go:89] found id: ""
	I0729 18:30:05.195582   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.195593   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:05.195600   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:05.195657   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:05.231071   78080 cri.go:89] found id: ""
	I0729 18:30:05.231095   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.231103   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:05.231108   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:05.231165   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:05.267445   78080 cri.go:89] found id: ""
	I0729 18:30:05.267474   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.267485   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:05.267493   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:05.267555   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:05.304258   78080 cri.go:89] found id: ""
	I0729 18:30:05.304279   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.304286   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:05.304291   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:05.304338   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:05.339155   78080 cri.go:89] found id: ""
	I0729 18:30:05.339176   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.339184   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:05.339190   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:05.339243   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:05.375291   78080 cri.go:89] found id: ""
	I0729 18:30:05.375328   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.375337   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:05.375346   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:05.375361   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:05.446196   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:05.446221   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:05.446236   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:05.529421   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:05.529457   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:05.570234   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:05.570269   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:05.629349   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:05.629391   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:04.434625   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:06.934246   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:06.239886   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:08.242421   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:05.543394   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:08.042242   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:08.151320   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:08.165983   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:08.166045   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:08.205703   78080 cri.go:89] found id: ""
	I0729 18:30:08.205726   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.205733   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:08.205738   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:08.205786   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:08.245919   78080 cri.go:89] found id: ""
	I0729 18:30:08.245946   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.245957   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:08.245964   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:08.246024   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:08.286595   78080 cri.go:89] found id: ""
	I0729 18:30:08.286621   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.286631   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:08.286638   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:08.286700   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:08.330032   78080 cri.go:89] found id: ""
	I0729 18:30:08.330060   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.330070   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:08.330077   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:08.330140   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:08.362535   78080 cri.go:89] found id: ""
	I0729 18:30:08.362567   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.362578   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:08.362586   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:08.362645   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:08.397648   78080 cri.go:89] found id: ""
	I0729 18:30:08.397678   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.397688   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:08.397704   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:08.397766   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:08.433615   78080 cri.go:89] found id: ""
	I0729 18:30:08.433693   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.433716   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:08.433734   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:08.433809   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:08.465765   78080 cri.go:89] found id: ""
	I0729 18:30:08.465792   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.465803   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:08.465814   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:08.465829   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:08.536332   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:08.536360   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:08.536375   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:08.613737   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:08.613776   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:08.659707   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:08.659736   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:08.712702   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:08.712736   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:11.226660   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:11.240852   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:11.240919   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:11.277632   78080 cri.go:89] found id: ""
	I0729 18:30:11.277664   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.277675   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:11.277682   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:11.277751   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:11.312458   78080 cri.go:89] found id: ""
	I0729 18:30:11.312478   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.312485   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:11.312491   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:11.312551   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:11.350375   78080 cri.go:89] found id: ""
	I0729 18:30:11.350406   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.350416   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:11.350424   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:11.350486   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:11.389280   78080 cri.go:89] found id: ""
	I0729 18:30:11.389307   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.389317   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:11.389324   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:11.389382   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:11.424907   78080 cri.go:89] found id: ""
	I0729 18:30:11.424936   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.424944   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:11.424949   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:11.425009   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:11.480686   78080 cri.go:89] found id: ""
	I0729 18:30:11.480713   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.480720   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:11.480726   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:11.480778   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:11.514831   78080 cri.go:89] found id: ""
	I0729 18:30:11.514857   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.514864   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:11.514870   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:11.514917   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:11.547930   78080 cri.go:89] found id: ""
	I0729 18:30:11.547955   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.547964   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:11.547974   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:11.547989   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:11.586068   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:11.586098   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:11.646857   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:11.646892   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:11.663549   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:11.663576   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:11.731362   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:11.731383   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:11.731397   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:08.934638   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:11.434765   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:10.738608   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:12.740637   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:10.042514   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:12.042731   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:14.042952   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:14.315531   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:14.330485   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:14.330544   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:14.363403   78080 cri.go:89] found id: ""
	I0729 18:30:14.363433   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.363444   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:14.363451   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:14.363516   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:14.401204   78080 cri.go:89] found id: ""
	I0729 18:30:14.401227   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.401234   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:14.401240   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:14.401301   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:14.436737   78080 cri.go:89] found id: ""
	I0729 18:30:14.436765   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.436775   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:14.436782   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:14.436844   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:14.471376   78080 cri.go:89] found id: ""
	I0729 18:30:14.471403   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.471411   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:14.471419   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:14.471478   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:14.506883   78080 cri.go:89] found id: ""
	I0729 18:30:14.506914   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.506925   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:14.506932   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:14.506990   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:14.546444   78080 cri.go:89] found id: ""
	I0729 18:30:14.546469   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.546479   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:14.546486   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:14.546552   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:14.580282   78080 cri.go:89] found id: ""
	I0729 18:30:14.580313   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.580320   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:14.580326   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:14.580387   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:14.614185   78080 cri.go:89] found id: ""
	I0729 18:30:14.614210   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.614220   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:14.614231   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:14.614246   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:14.652588   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:14.652610   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:14.706056   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:14.706090   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:14.719332   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:14.719356   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:14.792087   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:14.792115   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:14.792136   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:13.934967   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:16.435238   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:14.740676   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:17.239466   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:19.239656   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:16.541564   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:18.547053   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:17.375639   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:17.389473   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:17.389535   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:17.424485   78080 cri.go:89] found id: ""
	I0729 18:30:17.424513   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.424521   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:17.424527   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:17.424572   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:17.461100   78080 cri.go:89] found id: ""
	I0729 18:30:17.461129   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.461136   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:17.461141   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:17.461191   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:17.494866   78080 cri.go:89] found id: ""
	I0729 18:30:17.494894   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.494902   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:17.494907   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:17.494983   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:17.529897   78080 cri.go:89] found id: ""
	I0729 18:30:17.529924   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.529934   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:17.529940   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:17.530002   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:17.569870   78080 cri.go:89] found id: ""
	I0729 18:30:17.569897   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.569905   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:17.569910   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:17.569958   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:17.605324   78080 cri.go:89] found id: ""
	I0729 18:30:17.605364   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.605384   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:17.605392   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:17.605457   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:17.640552   78080 cri.go:89] found id: ""
	I0729 18:30:17.640583   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.640595   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:17.640602   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:17.640668   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:17.679769   78080 cri.go:89] found id: ""
	I0729 18:30:17.679800   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.679808   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:17.679827   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:17.679843   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:17.757782   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:17.757814   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:17.803850   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:17.803878   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:17.857987   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:17.858017   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:17.871062   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:17.871086   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:17.940456   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
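The recurring kubectl failure above ("The connection to the server localhost:8443 was refused") is the expected symptom when nothing is listening on the apiserver port, consistent with the empty kube-apiserver container and process checks. A tiny, hypothetical probe that distinguishes "port closed" from other failure modes:

package main

import (
	"fmt"
	"net"
	"time"
)

// probe attempts a plain TCP connection to the apiserver port. A refused
// connection means nothing is listening yet; it is not a kubeconfig,
// certificate, or credentials problem.
func probe(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := probe("127.0.0.1:8443"); err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	fmt.Println("something is listening on 8443")
}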
	I0729 18:30:20.441171   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:20.454752   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:20.454824   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:20.490744   78080 cri.go:89] found id: ""
	I0729 18:30:20.490773   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.490783   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:20.490791   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:20.490853   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:20.524406   78080 cri.go:89] found id: ""
	I0729 18:30:20.524437   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.524448   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:20.524463   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:20.524515   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:20.559225   78080 cri.go:89] found id: ""
	I0729 18:30:20.559257   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.559268   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:20.559275   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:20.559337   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:20.595297   78080 cri.go:89] found id: ""
	I0729 18:30:20.595324   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.595355   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:20.595364   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:20.595436   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:20.632176   78080 cri.go:89] found id: ""
	I0729 18:30:20.632204   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.632215   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:20.632222   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:20.632282   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:20.676600   78080 cri.go:89] found id: ""
	I0729 18:30:20.676625   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.676632   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:20.676638   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:20.676734   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:20.717920   78080 cri.go:89] found id: ""
	I0729 18:30:20.717945   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.717955   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:20.717966   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:20.718021   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:20.756217   78080 cri.go:89] found id: ""
	I0729 18:30:20.756243   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.756253   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:20.756262   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:20.756277   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:20.837150   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:20.837189   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:20.876023   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:20.876050   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:20.932402   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:20.932429   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:20.947422   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:20.947454   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:21.022698   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:18.934790   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:21.434992   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:21.242999   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:23.739073   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:21.042689   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:23.042794   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:23.523141   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:23.538019   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:23.538098   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:23.576953   78080 cri.go:89] found id: ""
	I0729 18:30:23.576979   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.576991   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:23.576998   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:23.577060   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:23.613052   78080 cri.go:89] found id: ""
	I0729 18:30:23.613083   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.613094   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:23.613100   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:23.613170   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:23.648694   78080 cri.go:89] found id: ""
	I0729 18:30:23.648717   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.648725   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:23.648730   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:23.648775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:23.680939   78080 cri.go:89] found id: ""
	I0729 18:30:23.680965   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.680972   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:23.680977   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:23.681032   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:23.716529   78080 cri.go:89] found id: ""
	I0729 18:30:23.716556   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.716564   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:23.716569   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:23.716628   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:23.756833   78080 cri.go:89] found id: ""
	I0729 18:30:23.756860   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.756868   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:23.756873   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:23.756918   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:23.796436   78080 cri.go:89] found id: ""
	I0729 18:30:23.796460   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.796467   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:23.796472   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:23.796519   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:23.839877   78080 cri.go:89] found id: ""
	I0729 18:30:23.839906   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.839914   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:23.839922   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:23.839934   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:23.879423   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:23.879447   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:23.928379   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:23.928408   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:23.942639   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:23.942669   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:24.014068   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:24.014095   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:24.014110   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:26.597923   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:26.610877   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:26.610945   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:26.647550   78080 cri.go:89] found id: ""
	I0729 18:30:26.647579   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.647590   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:26.647598   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:26.647655   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:26.681552   78080 cri.go:89] found id: ""
	I0729 18:30:26.681581   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.681589   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:26.681595   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:26.681660   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:26.714475   78080 cri.go:89] found id: ""
	I0729 18:30:26.714503   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.714513   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:26.714519   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:26.714588   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:26.748671   78080 cri.go:89] found id: ""
	I0729 18:30:26.748697   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.748707   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:26.748714   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:26.748775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:26.781380   78080 cri.go:89] found id: ""
	I0729 18:30:26.781406   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.781421   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:26.781429   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:26.781483   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:26.815201   78080 cri.go:89] found id: ""
	I0729 18:30:26.815230   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.815243   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:26.815251   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:26.815318   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:26.848600   78080 cri.go:89] found id: ""
	I0729 18:30:26.848628   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.848637   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:26.848644   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:26.848724   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:26.883828   78080 cri.go:89] found id: ""
	I0729 18:30:26.883872   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.883883   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:26.883893   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:26.883908   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:26.936955   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:26.936987   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:26.952212   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:26.952238   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:27.019389   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:27.019413   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:27.019426   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:27.095654   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:27.095682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:23.935397   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:26.435231   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:26.238749   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:28.239699   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:25.044320   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:27.542022   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:29.542274   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:29.637269   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:29.652138   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:29.652211   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:29.691063   78080 cri.go:89] found id: ""
	I0729 18:30:29.691094   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.691104   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:29.691111   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:29.691173   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:29.725188   78080 cri.go:89] found id: ""
	I0729 18:30:29.725224   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.725232   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:29.725240   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:29.725308   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:29.764118   78080 cri.go:89] found id: ""
	I0729 18:30:29.764149   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.764159   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:29.764167   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:29.764232   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:29.797884   78080 cri.go:89] found id: ""
	I0729 18:30:29.797909   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.797919   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:29.797927   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:29.797989   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:29.838784   78080 cri.go:89] found id: ""
	I0729 18:30:29.838808   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.838815   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:29.838821   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:29.838885   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:29.872394   78080 cri.go:89] found id: ""
	I0729 18:30:29.872420   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.872427   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:29.872433   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:29.872491   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:29.908966   78080 cri.go:89] found id: ""
	I0729 18:30:29.908995   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.909012   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:29.909020   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:29.909081   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:29.946322   78080 cri.go:89] found id: ""
	I0729 18:30:29.946344   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.946352   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:29.946371   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:29.946386   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:30.019133   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:30.019166   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:30.019179   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:30.096499   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:30.096532   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:30.136487   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:30.136519   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:30.187341   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:30.187374   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:28.435472   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:30.934817   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:30.739101   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:32.742029   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:32.042850   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:34.042919   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:32.703546   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:32.716981   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:32.717042   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:32.753275   78080 cri.go:89] found id: ""
	I0729 18:30:32.753307   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.753318   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:32.753326   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:32.753393   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:32.789075   78080 cri.go:89] found id: ""
	I0729 18:30:32.789105   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.789116   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:32.789123   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:32.789185   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:32.822945   78080 cri.go:89] found id: ""
	I0729 18:30:32.822971   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.822979   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:32.822984   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:32.823033   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:32.856523   78080 cri.go:89] found id: ""
	I0729 18:30:32.856577   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.856589   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:32.856597   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:32.856661   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:32.895768   78080 cri.go:89] found id: ""
	I0729 18:30:32.895798   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.895810   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:32.895817   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:32.895876   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:32.934990   78080 cri.go:89] found id: ""
	I0729 18:30:32.935030   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.935042   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:32.935054   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:32.935132   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:32.970924   78080 cri.go:89] found id: ""
	I0729 18:30:32.970949   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.970957   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:32.970964   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:32.971022   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:33.004133   78080 cri.go:89] found id: ""
	I0729 18:30:33.004164   78080 logs.go:276] 0 containers: []
	W0729 18:30:33.004173   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:33.004182   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:33.004202   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:33.043432   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:33.043467   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:33.095517   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:33.095554   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:33.108859   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:33.108889   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:33.180661   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:33.180681   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:33.180696   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:35.763324   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:35.777060   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:35.777138   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:35.812601   78080 cri.go:89] found id: ""
	I0729 18:30:35.812636   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.812647   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:35.812654   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:35.812719   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:35.848116   78080 cri.go:89] found id: ""
	I0729 18:30:35.848161   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.848172   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:35.848179   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:35.848240   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:35.895786   78080 cri.go:89] found id: ""
	I0729 18:30:35.895817   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.895829   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:35.895837   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:35.895911   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:35.936753   78080 cri.go:89] found id: ""
	I0729 18:30:35.936780   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.936787   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:35.936794   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:35.936848   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:35.971321   78080 cri.go:89] found id: ""
	I0729 18:30:35.971349   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.971358   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:35.971371   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:35.971434   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:36.018702   78080 cri.go:89] found id: ""
	I0729 18:30:36.018725   78080 logs.go:276] 0 containers: []
	W0729 18:30:36.018732   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:36.018737   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:36.018792   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:36.054829   78080 cri.go:89] found id: ""
	I0729 18:30:36.054865   78080 logs.go:276] 0 containers: []
	W0729 18:30:36.054875   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:36.054882   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:36.054948   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:36.087456   78080 cri.go:89] found id: ""
	I0729 18:30:36.087483   78080 logs.go:276] 0 containers: []
	W0729 18:30:36.087492   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:36.087500   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:36.087512   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:36.140919   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:36.140951   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:36.155581   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:36.155614   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:36.227617   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:36.227642   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:36.227669   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:36.304610   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:36.304651   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:32.935270   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:34.935362   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:35.239258   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:37.242161   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:39.739031   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:36.043489   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:38.542041   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:38.843099   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:38.857571   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:38.857626   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:38.890760   78080 cri.go:89] found id: ""
	I0729 18:30:38.890790   78080 logs.go:276] 0 containers: []
	W0729 18:30:38.890801   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:38.890809   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:38.890884   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:38.932701   78080 cri.go:89] found id: ""
	I0729 18:30:38.932738   78080 logs.go:276] 0 containers: []
	W0729 18:30:38.932748   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:38.932755   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:38.932812   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:38.967379   78080 cri.go:89] found id: ""
	I0729 18:30:38.967406   78080 logs.go:276] 0 containers: []
	W0729 18:30:38.967416   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:38.967430   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:38.967490   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:39.000419   78080 cri.go:89] found id: ""
	I0729 18:30:39.000450   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.000459   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:39.000466   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:39.000528   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:39.033764   78080 cri.go:89] found id: ""
	I0729 18:30:39.033793   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.033802   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:39.033807   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:39.033857   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:39.070904   78080 cri.go:89] found id: ""
	I0729 18:30:39.070933   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.070944   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:39.070951   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:39.071010   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:39.107444   78080 cri.go:89] found id: ""
	I0729 18:30:39.107471   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.107480   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:39.107488   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:39.107549   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:39.141392   78080 cri.go:89] found id: ""
	I0729 18:30:39.141423   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.141436   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:39.141449   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:39.141464   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:39.154874   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:39.154905   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:39.229370   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:39.229396   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:39.229413   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:39.310508   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:39.310538   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:39.352547   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:39.352569   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:41.908463   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:41.922132   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:41.922209   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:41.960404   78080 cri.go:89] found id: ""
	I0729 18:30:41.960431   78080 logs.go:276] 0 containers: []
	W0729 18:30:41.960439   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:41.960444   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:41.960498   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:41.994082   78080 cri.go:89] found id: ""
	I0729 18:30:41.994110   78080 logs.go:276] 0 containers: []
	W0729 18:30:41.994117   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:41.994123   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:41.994177   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:42.030301   78080 cri.go:89] found id: ""
	I0729 18:30:42.030322   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.030330   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:42.030336   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:42.030401   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:42.064310   78080 cri.go:89] found id: ""
	I0729 18:30:42.064339   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.064349   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:42.064356   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:42.064413   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:42.097705   78080 cri.go:89] found id: ""
	I0729 18:30:42.097738   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.097748   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:42.097761   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:42.097819   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:42.133254   78080 cri.go:89] found id: ""
	I0729 18:30:42.133282   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.133292   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:42.133299   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:42.133361   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:42.170028   78080 cri.go:89] found id: ""
	I0729 18:30:42.170054   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.170063   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:42.170075   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:42.170141   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:42.205680   78080 cri.go:89] found id: ""
	I0729 18:30:42.205712   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.205723   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:42.205736   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:42.205749   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:37.442211   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:39.934866   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:41.935293   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:42.240035   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:41.041897   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:43.042300   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:42.246322   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:42.246350   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:42.300852   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:42.300884   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:42.316306   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:42.316333   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:42.389898   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:42.389920   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:42.389934   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:44.971238   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:44.984796   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:44.984846   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:45.021842   78080 cri.go:89] found id: ""
	I0729 18:30:45.021868   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.021877   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:45.021885   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:45.021958   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:45.059353   78080 cri.go:89] found id: ""
	I0729 18:30:45.059377   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.059387   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:45.059394   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:45.059456   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:45.094867   78080 cri.go:89] found id: ""
	I0729 18:30:45.094900   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.094911   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:45.094918   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:45.094974   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:45.128589   78080 cri.go:89] found id: ""
	I0729 18:30:45.128614   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.128622   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:45.128628   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:45.128671   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:45.160137   78080 cri.go:89] found id: ""
	I0729 18:30:45.160165   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.160172   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:45.160177   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:45.160228   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:45.205757   78080 cri.go:89] found id: ""
	I0729 18:30:45.205780   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.205787   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:45.205793   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:45.205840   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:45.250056   78080 cri.go:89] found id: ""
	I0729 18:30:45.250084   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.250091   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:45.250096   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:45.250179   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:45.285349   78080 cri.go:89] found id: ""
	I0729 18:30:45.285372   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.285380   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:45.285389   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:45.285401   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:45.364188   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:45.364218   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:45.412638   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:45.412660   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:45.467713   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:45.467745   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:45.483811   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:45.483835   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:45.564866   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:44.434921   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:46.934237   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:44.740648   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:47.239253   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:49.240229   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:45.043415   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:47.542757   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:49.543251   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:48.065579   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:48.079441   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:48.079511   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:48.115540   78080 cri.go:89] found id: ""
	I0729 18:30:48.115569   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.115578   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:48.115586   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:48.115670   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:48.151810   78080 cri.go:89] found id: ""
	I0729 18:30:48.151834   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.151841   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:48.151847   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:48.151913   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:48.187459   78080 cri.go:89] found id: ""
	I0729 18:30:48.187490   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.187500   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:48.187508   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:48.187568   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:48.226804   78080 cri.go:89] found id: ""
	I0729 18:30:48.226835   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.226846   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:48.226853   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:48.226916   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:48.260413   78080 cri.go:89] found id: ""
	I0729 18:30:48.260439   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.260448   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:48.260455   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:48.260517   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:48.296719   78080 cri.go:89] found id: ""
	I0729 18:30:48.296743   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.296751   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:48.296756   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:48.296806   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:48.331969   78080 cri.go:89] found id: ""
	I0729 18:30:48.331995   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.332002   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:48.332008   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:48.332055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:48.370593   78080 cri.go:89] found id: ""
	I0729 18:30:48.370618   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.370626   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:48.370634   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:48.370645   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:48.410653   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:48.410679   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:48.465467   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:48.465503   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:48.480025   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:48.480053   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:48.557806   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:48.557824   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:48.557840   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:51.140743   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:51.153970   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:51.154046   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:51.187826   78080 cri.go:89] found id: ""
	I0729 18:30:51.187851   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.187862   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:51.187868   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:51.187922   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:51.226140   78080 cri.go:89] found id: ""
	I0729 18:30:51.226172   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.226182   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:51.226189   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:51.226255   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:51.262321   78080 cri.go:89] found id: ""
	I0729 18:30:51.262349   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.262357   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:51.262378   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:51.262440   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:51.295356   78080 cri.go:89] found id: ""
	I0729 18:30:51.295383   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.295395   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:51.295403   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:51.295467   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:51.328320   78080 cri.go:89] found id: ""
	I0729 18:30:51.328349   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.328361   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:51.328367   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:51.328424   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:51.364202   78080 cri.go:89] found id: ""
	I0729 18:30:51.364233   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.364242   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:51.364249   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:51.364313   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:51.405500   78080 cri.go:89] found id: ""
	I0729 18:30:51.405529   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.405538   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:51.405544   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:51.405606   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:51.443519   78080 cri.go:89] found id: ""
	I0729 18:30:51.443541   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.443548   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:51.443556   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:51.443567   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:51.495560   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:51.495599   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:51.512152   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:51.512178   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:51.590972   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:51.590992   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:51.591021   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:51.688717   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:51.688757   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:48.934577   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:51.437173   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:51.739680   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:54.238626   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:52.044254   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:54.545288   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:54.256011   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:54.270602   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:54.270653   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:54.311547   78080 cri.go:89] found id: ""
	I0729 18:30:54.311574   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.311584   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:54.311592   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:54.311655   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:54.347559   78080 cri.go:89] found id: ""
	I0729 18:30:54.347591   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.347602   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:54.347610   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:54.347675   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:54.382180   78080 cri.go:89] found id: ""
	I0729 18:30:54.382205   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.382212   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:54.382217   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:54.382264   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:54.415560   78080 cri.go:89] found id: ""
	I0729 18:30:54.415587   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.415594   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:54.415600   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:54.415655   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:54.450313   78080 cri.go:89] found id: ""
	I0729 18:30:54.450341   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.450351   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:54.450372   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:54.450439   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:54.484649   78080 cri.go:89] found id: ""
	I0729 18:30:54.484678   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.484687   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:54.484694   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:54.484741   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:54.520170   78080 cri.go:89] found id: ""
	I0729 18:30:54.520204   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.520212   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:54.520220   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:54.520270   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:54.562724   78080 cri.go:89] found id: ""
	I0729 18:30:54.562753   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.562762   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:54.562772   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:54.562788   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:54.617461   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:54.617498   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:54.630970   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:54.630993   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:54.699332   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:54.699353   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:54.699366   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:54.779240   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:54.779276   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:53.934151   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:56.434549   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:56.239554   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:58.239583   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:57.041845   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:59.042164   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:57.318673   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:57.332789   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:57.332845   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:57.370434   78080 cri.go:89] found id: ""
	I0729 18:30:57.370461   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.370486   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:57.370492   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:57.370547   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:57.420694   78080 cri.go:89] found id: ""
	I0729 18:30:57.420724   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.420735   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:57.420742   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:57.420808   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:57.469245   78080 cri.go:89] found id: ""
	I0729 18:30:57.469271   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.469282   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:57.469288   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:57.469355   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:57.524937   78080 cri.go:89] found id: ""
	I0729 18:30:57.524963   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.524970   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:57.524976   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:57.525031   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:57.566803   78080 cri.go:89] found id: ""
	I0729 18:30:57.566830   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.566840   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:57.566847   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:57.566910   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:57.602786   78080 cri.go:89] found id: ""
	I0729 18:30:57.602814   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.602821   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:57.602826   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:57.602891   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:57.639319   78080 cri.go:89] found id: ""
	I0729 18:30:57.639347   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.639355   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:57.639361   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:57.639408   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:57.672580   78080 cri.go:89] found id: ""
	I0729 18:30:57.672610   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.672621   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:57.672632   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:57.672647   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:57.751550   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:57.751572   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:57.751586   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:57.840057   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:57.840097   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:57.884698   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:57.884737   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:57.944468   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:57.944497   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:00.459605   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:00.473079   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:00.473138   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:00.508492   78080 cri.go:89] found id: ""
	I0729 18:31:00.508525   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.508536   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:00.508543   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:00.508604   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:00.544844   78080 cri.go:89] found id: ""
	I0729 18:31:00.544875   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.544886   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:00.544899   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:00.544960   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:00.578402   78080 cri.go:89] found id: ""
	I0729 18:31:00.578432   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.578443   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:00.578450   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:00.578508   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:00.611886   78080 cri.go:89] found id: ""
	I0729 18:31:00.611913   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.611922   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:00.611928   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:00.611989   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:00.649126   78080 cri.go:89] found id: ""
	I0729 18:31:00.649153   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.649162   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:00.649168   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:00.649229   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:00.686534   78080 cri.go:89] found id: ""
	I0729 18:31:00.686561   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.686571   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:00.686578   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:00.686639   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:00.718656   78080 cri.go:89] found id: ""
	I0729 18:31:00.718680   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.718690   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:00.718696   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:00.718755   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:00.752740   78080 cri.go:89] found id: ""
	I0729 18:31:00.752766   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.752776   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:00.752786   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:00.752800   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:00.804293   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:00.804323   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:00.817988   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:00.818010   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:00.892178   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:00.892210   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:00.892231   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:00.973164   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:00.973199   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:58.434888   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:00.934518   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:00.239908   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:02.240038   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:04.240420   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:01.542080   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:03.542877   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:04.036213   77627 pod_ready.go:81] duration metric: took 4m0.000109353s for pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace to be "Ready" ...
	E0729 18:31:04.036235   77627 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 18:31:04.036250   77627 pod_ready.go:38] duration metric: took 4m10.564329435s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:31:04.036294   77627 kubeadm.go:597] duration metric: took 4m18.357564209s to restartPrimaryControlPlane
	W0729 18:31:04.036359   77627 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 18:31:04.036388   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:31:03.512105   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:03.526536   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:03.526602   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:03.561579   78080 cri.go:89] found id: ""
	I0729 18:31:03.561604   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.561614   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:03.561621   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:03.561681   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:03.603995   78080 cri.go:89] found id: ""
	I0729 18:31:03.604019   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.604028   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:03.604033   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:03.604079   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:03.640879   78080 cri.go:89] found id: ""
	I0729 18:31:03.640902   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.640910   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:03.640917   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:03.640971   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:03.675262   78080 cri.go:89] found id: ""
	I0729 18:31:03.675288   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.675296   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:03.675302   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:03.675349   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:03.708094   78080 cri.go:89] found id: ""
	I0729 18:31:03.708128   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.708137   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:03.708142   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:03.708190   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:03.748262   78080 cri.go:89] found id: ""
	I0729 18:31:03.748287   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.748298   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:03.748304   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:03.748360   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:03.789758   78080 cri.go:89] found id: ""
	I0729 18:31:03.789788   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.789800   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:03.789806   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:03.789893   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:03.829253   78080 cri.go:89] found id: ""
	I0729 18:31:03.829280   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.829291   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:03.829299   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:03.829317   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:03.883012   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:03.883044   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:03.899264   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:03.899294   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:03.970241   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:03.970261   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:03.970274   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:04.056205   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:04.056244   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:06.604919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:06.619163   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:06.619242   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:06.656939   78080 cri.go:89] found id: ""
	I0729 18:31:06.656970   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.656982   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:06.656989   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:06.657075   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:06.692577   78080 cri.go:89] found id: ""
	I0729 18:31:06.692608   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.692624   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:06.692632   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:06.692695   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:06.730045   78080 cri.go:89] found id: ""
	I0729 18:31:06.730077   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.730088   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:06.730096   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:06.730179   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:06.771794   78080 cri.go:89] found id: ""
	I0729 18:31:06.771820   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.771830   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:06.771838   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:06.771905   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:06.806149   78080 cri.go:89] found id: ""
	I0729 18:31:06.806177   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.806187   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:06.806194   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:06.806252   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:06.851875   78080 cri.go:89] found id: ""
	I0729 18:31:06.851905   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.851923   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:06.851931   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:06.851996   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:06.890335   78080 cri.go:89] found id: ""
	I0729 18:31:06.890382   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.890393   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:06.890399   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:06.890460   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:06.928873   78080 cri.go:89] found id: ""
	I0729 18:31:06.928902   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.928912   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:06.928922   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:06.928935   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:06.944269   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:06.944295   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:07.011658   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:07.011682   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:07.011697   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:07.109899   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:07.109948   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:07.154569   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:07.154600   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:02.935054   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:05.434752   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:06.242994   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:08.738448   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:09.709101   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:09.722387   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:09.722461   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:09.760443   78080 cri.go:89] found id: ""
	I0729 18:31:09.760471   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.760481   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:09.760488   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:09.760551   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:09.796177   78080 cri.go:89] found id: ""
	I0729 18:31:09.796200   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.796209   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:09.796214   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:09.796264   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:09.831955   78080 cri.go:89] found id: ""
	I0729 18:31:09.831983   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.831990   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:09.831995   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:09.832055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:09.863913   78080 cri.go:89] found id: ""
	I0729 18:31:09.863939   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.863949   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:09.863956   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:09.864014   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:09.897553   78080 cri.go:89] found id: ""
	I0729 18:31:09.897575   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.897583   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:09.897588   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:09.897645   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:09.935203   78080 cri.go:89] found id: ""
	I0729 18:31:09.935221   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.935228   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:09.935238   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:09.935296   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:09.971098   78080 cri.go:89] found id: ""
	I0729 18:31:09.971125   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.971135   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:09.971142   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:09.971224   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:10.006760   78080 cri.go:89] found id: ""
	I0729 18:31:10.006794   78080 logs.go:276] 0 containers: []
	W0729 18:31:10.006804   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:10.006815   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:10.006830   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:10.056037   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:10.056066   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:10.070633   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:10.070660   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:10.139953   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:10.139983   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:10.140002   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:10.220748   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:10.220781   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:07.436020   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:09.934218   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:11.934977   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:10.740109   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:13.239440   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:12.766391   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:12.779837   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:12.779889   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:12.813910   78080 cri.go:89] found id: ""
	I0729 18:31:12.813941   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.813951   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:12.813959   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:12.814008   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:12.848811   78080 cri.go:89] found id: ""
	I0729 18:31:12.848854   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.848865   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:12.848872   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:12.848927   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:12.884740   78080 cri.go:89] found id: ""
	I0729 18:31:12.884769   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.884780   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:12.884786   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:12.884833   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:12.923826   78080 cri.go:89] found id: ""
	I0729 18:31:12.923859   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.923870   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:12.923878   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:12.923930   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:12.959127   78080 cri.go:89] found id: ""
	I0729 18:31:12.959157   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.959168   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:12.959175   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:12.959245   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:12.994384   78080 cri.go:89] found id: ""
	I0729 18:31:12.994417   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.994430   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:12.994439   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:12.994506   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:13.027854   78080 cri.go:89] found id: ""
	I0729 18:31:13.027883   78080 logs.go:276] 0 containers: []
	W0729 18:31:13.027892   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:13.027897   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:13.027951   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:13.062270   78080 cri.go:89] found id: ""
	I0729 18:31:13.062300   78080 logs.go:276] 0 containers: []
	W0729 18:31:13.062310   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:13.062321   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:13.062334   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:13.114473   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:13.114500   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:13.127820   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:13.127845   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:13.195830   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:13.195848   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:13.195862   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:13.281711   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:13.281748   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:15.824456   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:15.837532   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:15.837587   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:15.871706   78080 cri.go:89] found id: ""
	I0729 18:31:15.871739   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.871750   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:15.871757   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:15.871817   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:15.906882   78080 cri.go:89] found id: ""
	I0729 18:31:15.906905   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.906912   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:15.906917   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:15.906976   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:15.943015   78080 cri.go:89] found id: ""
	I0729 18:31:15.943043   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.943057   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:15.943065   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:15.943126   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:15.980501   78080 cri.go:89] found id: ""
	I0729 18:31:15.980528   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.980536   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:15.980542   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:15.980588   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:16.014148   78080 cri.go:89] found id: ""
	I0729 18:31:16.014176   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.014183   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:16.014189   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:16.014236   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:16.048296   78080 cri.go:89] found id: ""
	I0729 18:31:16.048319   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.048326   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:16.048334   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:16.048392   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:16.084328   78080 cri.go:89] found id: ""
	I0729 18:31:16.084350   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.084358   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:16.084363   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:16.084411   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:16.120048   78080 cri.go:89] found id: ""
	I0729 18:31:16.120076   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.120084   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:16.120092   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:16.120105   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:16.173476   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:16.173503   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:16.190200   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:16.190232   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:16.261993   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:16.262014   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:16.262026   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:16.340298   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:16.340331   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:14.434706   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:16.936150   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:15.739493   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:18.239834   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:18.883152   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:18.897292   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:18.897360   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:18.931276   78080 cri.go:89] found id: ""
	I0729 18:31:18.931303   78080 logs.go:276] 0 containers: []
	W0729 18:31:18.931313   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:18.931321   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:18.931379   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:18.975803   78080 cri.go:89] found id: ""
	I0729 18:31:18.975832   78080 logs.go:276] 0 containers: []
	W0729 18:31:18.975843   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:18.975853   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:18.975912   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:19.012920   78080 cri.go:89] found id: ""
	I0729 18:31:19.012951   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.012963   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:19.012970   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:19.013031   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:19.047640   78080 cri.go:89] found id: ""
	I0729 18:31:19.047667   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.047679   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:19.047687   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:19.047749   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:19.082495   78080 cri.go:89] found id: ""
	I0729 18:31:19.082522   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.082533   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:19.082540   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:19.082591   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:19.117988   78080 cri.go:89] found id: ""
	I0729 18:31:19.118016   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.118027   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:19.118034   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:19.118096   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:19.153725   78080 cri.go:89] found id: ""
	I0729 18:31:19.153753   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.153764   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:19.153771   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:19.153836   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:19.192827   78080 cri.go:89] found id: ""
	I0729 18:31:19.192857   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.192868   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:19.192879   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:19.192894   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:19.208802   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:19.208833   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:19.285877   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:19.285897   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:19.285909   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:19.366563   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:19.366598   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:19.404563   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:19.404590   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:21.958449   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:21.971674   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:21.971739   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:22.006231   78080 cri.go:89] found id: ""
	I0729 18:31:22.006253   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.006261   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:22.006266   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:22.006314   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:22.042575   78080 cri.go:89] found id: ""
	I0729 18:31:22.042599   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.042609   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:22.042616   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:22.042679   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:22.079446   78080 cri.go:89] found id: ""
	I0729 18:31:22.079471   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.079482   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:22.079489   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:22.079554   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:22.115940   78080 cri.go:89] found id: ""
	I0729 18:31:22.115967   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.115976   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:22.115984   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:22.116055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:22.149420   78080 cri.go:89] found id: ""
	I0729 18:31:22.149447   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.149456   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:22.149461   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:22.149511   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:22.182992   78080 cri.go:89] found id: ""
	I0729 18:31:22.183019   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.183027   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:22.183032   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:22.183090   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:22.218441   78080 cri.go:89] found id: ""
	I0729 18:31:22.218474   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.218487   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:22.218497   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:22.218564   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:19.434020   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:21.434806   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:20.739308   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:22.741502   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:22.263135   78080 cri.go:89] found id: ""
	I0729 18:31:22.263164   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.263173   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:22.263183   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:22.263198   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:22.319010   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:22.319049   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:22.333151   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:22.333179   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:22.404661   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:22.404683   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:22.404706   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:22.488497   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:22.488537   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:25.032215   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:25.045114   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:25.045191   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:25.082244   78080 cri.go:89] found id: ""
	I0729 18:31:25.082278   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.082289   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:25.082299   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:25.082388   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:25.118295   78080 cri.go:89] found id: ""
	I0729 18:31:25.118318   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.118325   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:25.118331   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:25.118395   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:25.157948   78080 cri.go:89] found id: ""
	I0729 18:31:25.157974   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.157984   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:25.157992   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:25.158054   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:25.194708   78080 cri.go:89] found id: ""
	I0729 18:31:25.194734   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.194743   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:25.194751   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:25.194813   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:25.235923   78080 cri.go:89] found id: ""
	I0729 18:31:25.235952   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.235962   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:25.235969   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:25.236032   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:25.271316   78080 cri.go:89] found id: ""
	I0729 18:31:25.271342   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.271353   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:25.271360   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:25.271422   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:25.309399   78080 cri.go:89] found id: ""
	I0729 18:31:25.309427   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.309438   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:25.309446   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:25.309503   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:25.347979   78080 cri.go:89] found id: ""
	I0729 18:31:25.348009   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.348021   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:25.348031   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:25.348046   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:25.400785   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:25.400812   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:25.413891   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:25.413915   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:25.487721   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:25.487752   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:25.487767   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:25.575500   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:25.575531   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:23.935200   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:26.434289   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:25.240961   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:27.738838   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:27.738866   77859 pod_ready.go:81] duration metric: took 4m0.005785253s for pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace to be "Ready" ...
	E0729 18:31:27.738877   77859 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0729 18:31:27.738887   77859 pod_ready.go:38] duration metric: took 4m4.550102816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:31:27.738903   77859 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:31:27.738934   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:27.738991   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:27.798686   77859 cri.go:89] found id: "630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:27.798710   77859 cri.go:89] found id: ""
	I0729 18:31:27.798717   77859 logs.go:276] 1 containers: [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4]
	I0729 18:31:27.798774   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.804769   77859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:27.804827   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:27.849829   77859 cri.go:89] found id: "fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:27.849849   77859 cri.go:89] found id: ""
	I0729 18:31:27.849857   77859 logs.go:276] 1 containers: [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a]
	I0729 18:31:27.849909   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.854472   77859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:27.854540   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:27.891637   77859 cri.go:89] found id: "2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:27.891659   77859 cri.go:89] found id: ""
	I0729 18:31:27.891668   77859 logs.go:276] 1 containers: [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b]
	I0729 18:31:27.891715   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.896663   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:27.896713   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:27.941948   77859 cri.go:89] found id: "991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:27.941968   77859 cri.go:89] found id: ""
	I0729 18:31:27.941976   77859 logs.go:276] 1 containers: [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd]
	I0729 18:31:27.942018   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.946770   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:27.946821   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:27.988118   77859 cri.go:89] found id: "ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:27.988139   77859 cri.go:89] found id: ""
	I0729 18:31:27.988147   77859 logs.go:276] 1 containers: [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9]
	I0729 18:31:27.988193   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.992474   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:27.992535   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:28.032779   77859 cri.go:89] found id: "92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:28.032801   77859 cri.go:89] found id: ""
	I0729 18:31:28.032811   77859 logs.go:276] 1 containers: [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc]
	I0729 18:31:28.032859   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:28.037791   77859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:28.037838   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:28.081087   77859 cri.go:89] found id: ""
	I0729 18:31:28.081115   77859 logs.go:276] 0 containers: []
	W0729 18:31:28.081124   77859 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:28.081131   77859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 18:31:28.081183   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 18:31:28.123906   77859 cri.go:89] found id: "9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:28.123927   77859 cri.go:89] found id: "482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:28.123933   77859 cri.go:89] found id: ""
	I0729 18:31:28.123940   77859 logs.go:276] 2 containers: [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b]
	I0729 18:31:28.123979   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:28.128737   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:28.133127   77859 logs.go:123] Gathering logs for storage-provisioner [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481] ...
	I0729 18:31:28.133201   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:28.182950   77859 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:28.182985   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:28.241873   77859 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:28.241914   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 18:31:28.391355   77859 logs.go:123] Gathering logs for kube-apiserver [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4] ...
	I0729 18:31:28.391389   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:28.447637   77859 logs.go:123] Gathering logs for etcd [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a] ...
	I0729 18:31:28.447671   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:28.496815   77859 logs.go:123] Gathering logs for kube-scheduler [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd] ...
	I0729 18:31:28.496848   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:28.540617   77859 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:28.540651   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:29.063074   77859 logs.go:123] Gathering logs for container status ...
	I0729 18:31:29.063116   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:29.123348   77859 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:29.123378   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:29.137340   77859 logs.go:123] Gathering logs for coredns [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b] ...
	I0729 18:31:29.137365   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:29.174775   77859 logs.go:123] Gathering logs for kube-proxy [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9] ...
	I0729 18:31:29.174810   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:29.227526   77859 logs.go:123] Gathering logs for kube-controller-manager [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc] ...
	I0729 18:31:29.227560   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:29.281814   77859 logs.go:123] Gathering logs for storage-provisioner [482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b] ...
	I0729 18:31:29.281844   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:28.121761   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:28.136756   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:28.136813   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:28.175461   78080 cri.go:89] found id: ""
	I0729 18:31:28.175491   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.175502   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:28.175509   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:28.175567   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:28.215024   78080 cri.go:89] found id: ""
	I0729 18:31:28.215046   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.215055   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:28.215060   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:28.215122   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:28.253999   78080 cri.go:89] found id: ""
	I0729 18:31:28.254023   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.254031   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:28.254037   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:28.254090   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:28.287902   78080 cri.go:89] found id: ""
	I0729 18:31:28.287929   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.287940   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:28.287948   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:28.288006   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:28.322390   78080 cri.go:89] found id: ""
	I0729 18:31:28.322422   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.322433   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:28.322441   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:28.322500   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:28.356951   78080 cri.go:89] found id: ""
	I0729 18:31:28.356980   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.356991   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:28.356999   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:28.357060   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:28.393439   78080 cri.go:89] found id: ""
	I0729 18:31:28.393461   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.393471   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:28.393477   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:28.393535   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:28.431827   78080 cri.go:89] found id: ""
	I0729 18:31:28.431858   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.431868   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:28.431878   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:28.431892   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:28.509279   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:28.509315   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:28.564036   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:28.564064   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:28.626970   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:28.627000   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:28.641417   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:28.641446   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:28.713406   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
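
	The describe-nodes step fails here because no kube-apiserver container is running on this node yet (the crictl listings above all returned zero containers), so anything hitting localhost:8443 is refused until kubeadm brings the control plane back up. A quick manual confirmation would look like this (illustrative only):

	    # Illustration: confirm the refusal comes from a missing apiserver, not networking.
	    sudo crictl ps -a --name kube-apiserver     # empty on this node at this point (see log)
	    curl -sk https://localhost:8443/healthz     # "connection refused" until kubeadm init finishes
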
	I0729 18:31:31.213942   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:31.228942   78080 kubeadm.go:597] duration metric: took 4m3.040952507s to restartPrimaryControlPlane
	W0729 18:31:31.229020   78080 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 18:31:31.229042   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:31:31.696335   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:31:31.711230   78080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:31:31.720924   78080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:31:31.730348   78080 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:31:31.730378   78080 kubeadm.go:157] found existing configuration files:
	
	I0729 18:31:31.730418   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:31:31.739761   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:31:31.739810   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:31:31.749021   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:31:31.758107   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:31:31.758155   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:31:31.768326   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:31:31.777347   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:31:31.777388   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:31:31.786752   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:31:31.795728   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:31:31.795776   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
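
	The block above is the stale-kubeconfig check: for each file under /etc/kubernetes, minikube greps for the expected control-plane endpoint and removes the file when the check fails (here the files simply do not exist after the reset). A minimal shell sketch of the same pattern, with the endpoint and paths taken from the log (an illustration, not minikube's actual code):

	    # Sketch of the check/cleanup loop visible in the log above.
	    endpoint="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
	            sudo rm -f "/etc/kubernetes/$f"    # stale or missing: clear it before kubeadm init
	        fi
	    done
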
	I0729 18:31:31.805369   78080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:31:31.883678   78080 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 18:31:31.883751   78080 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:31:32.040989   78080 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:31:32.041127   78080 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:31:32.041259   78080 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:31:32.261525   78080 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:31:28.434784   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:30.435227   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:32.263137   78080 out.go:204]   - Generating certificates and keys ...
	I0729 18:31:32.263242   78080 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:31:32.263349   78080 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:31:32.263461   78080 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:31:32.263554   78080 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:31:32.263640   78080 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:31:32.263724   78080 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:31:32.263801   78080 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:31:32.263872   78080 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:31:32.263993   78080 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:31:32.264109   78080 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:31:32.264164   78080 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:31:32.264255   78080 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:31:32.435248   78080 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:31:32.509478   78080 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:31:32.737003   78080 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:31:33.079523   78080 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:31:33.099871   78080 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:31:33.101450   78080 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:31:33.101520   78080 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:31:33.242577   78080 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:31:31.826678   77859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:31.845448   77859 api_server.go:72] duration metric: took 4m16.365262679s to wait for apiserver process to appear ...
	I0729 18:31:31.845478   77859 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:31:31.845519   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:31.845568   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:31.889194   77859 cri.go:89] found id: "630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:31.889226   77859 cri.go:89] found id: ""
	I0729 18:31:31.889236   77859 logs.go:276] 1 containers: [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4]
	I0729 18:31:31.889290   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:31.894167   77859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:31.894271   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:31.936287   77859 cri.go:89] found id: "fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:31.936306   77859 cri.go:89] found id: ""
	I0729 18:31:31.936315   77859 logs.go:276] 1 containers: [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a]
	I0729 18:31:31.936367   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:31.941051   77859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:31.941110   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:31.978033   77859 cri.go:89] found id: "2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:31.978057   77859 cri.go:89] found id: ""
	I0729 18:31:31.978066   77859 logs.go:276] 1 containers: [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b]
	I0729 18:31:31.978115   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:31.982632   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:31.982704   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:32.023792   77859 cri.go:89] found id: "991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:32.023812   77859 cri.go:89] found id: ""
	I0729 18:31:32.023820   77859 logs.go:276] 1 containers: [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd]
	I0729 18:31:32.023875   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.028309   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:32.028367   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:32.071944   77859 cri.go:89] found id: "ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:32.071966   77859 cri.go:89] found id: ""
	I0729 18:31:32.071975   77859 logs.go:276] 1 containers: [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9]
	I0729 18:31:32.072033   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.076171   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:32.076252   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:32.111357   77859 cri.go:89] found id: "92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:32.111379   77859 cri.go:89] found id: ""
	I0729 18:31:32.111389   77859 logs.go:276] 1 containers: [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc]
	I0729 18:31:32.111446   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.115718   77859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:32.115775   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:32.168552   77859 cri.go:89] found id: ""
	I0729 18:31:32.168586   77859 logs.go:276] 0 containers: []
	W0729 18:31:32.168597   77859 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:32.168604   77859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 18:31:32.168686   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 18:31:32.210002   77859 cri.go:89] found id: "9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:32.210027   77859 cri.go:89] found id: "482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:32.210034   77859 cri.go:89] found id: ""
	I0729 18:31:32.210043   77859 logs.go:276] 2 containers: [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b]
	I0729 18:31:32.210090   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.214929   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.220097   77859 logs.go:123] Gathering logs for container status ...
	I0729 18:31:32.220121   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:32.270343   77859 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:32.270384   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:32.329269   77859 logs.go:123] Gathering logs for kube-apiserver [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4] ...
	I0729 18:31:32.329303   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:32.388361   77859 logs.go:123] Gathering logs for storage-provisioner [482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b] ...
	I0729 18:31:32.388388   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:32.430072   77859 logs.go:123] Gathering logs for coredns [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b] ...
	I0729 18:31:32.430108   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:32.471669   77859 logs.go:123] Gathering logs for kube-scheduler [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd] ...
	I0729 18:31:32.471701   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:32.508395   77859 logs.go:123] Gathering logs for kube-proxy [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9] ...
	I0729 18:31:32.508424   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:32.548968   77859 logs.go:123] Gathering logs for kube-controller-manager [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc] ...
	I0729 18:31:32.549001   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:32.605269   77859 logs.go:123] Gathering logs for storage-provisioner [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481] ...
	I0729 18:31:32.605306   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:32.642298   77859 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:32.642330   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:32.659407   77859 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:32.659431   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 18:31:32.776509   77859 logs.go:123] Gathering logs for etcd [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a] ...
	I0729 18:31:32.776544   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:32.832365   77859 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:32.832395   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:35.748109   77627 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.711694865s)
	I0729 18:31:35.748184   77627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:31:35.765137   77627 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:31:35.775945   77627 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:31:35.786206   77627 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:31:35.786232   77627 kubeadm.go:157] found existing configuration files:
	
	I0729 18:31:35.786284   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:31:35.797157   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:31:35.797218   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:31:35.810497   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:31:35.821537   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:31:35.821603   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:31:35.832985   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:31:35.842247   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:31:35.842309   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:31:35.852578   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:31:35.861798   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:31:35.861858   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:31:35.872903   77627 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:31:35.926675   77627 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 18:31:35.926872   77627 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:31:36.089002   77627 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:31:36.089179   77627 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:31:36.089310   77627 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:31:36.321844   77627 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:31:33.244436   78080 out.go:204]   - Booting up control plane ...
	I0729 18:31:33.244570   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:31:33.245677   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:31:33.249530   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:31:33.250262   78080 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:31:33.261418   78080 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 18:31:36.324255   77627 out.go:204]   - Generating certificates and keys ...
	I0729 18:31:36.324352   77627 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:31:36.324435   77627 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:31:36.324539   77627 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:31:36.324619   77627 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:31:36.324707   77627 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:31:36.324780   77627 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:31:36.324864   77627 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:31:36.324945   77627 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:31:36.325036   77627 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:31:36.325175   77627 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:31:36.325340   77627 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:31:36.325425   77627 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:31:36.815491   77627 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:31:36.870914   77627 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 18:31:36.957705   77627 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:31:37.074845   77627 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:31:37.220920   77627 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:31:37.221651   77627 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:31:37.224384   77627 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:31:32.435653   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:34.933615   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:36.935070   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:35.792366   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:31:35.801160   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 200:
	ok
	I0729 18:31:35.804043   77859 api_server.go:141] control plane version: v1.30.3
	I0729 18:31:35.804063   77859 api_server.go:131] duration metric: took 3.958578435s to wait for apiserver health ...
	I0729 18:31:35.804072   77859 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:31:35.804099   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:35.804140   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:35.845977   77859 cri.go:89] found id: "630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:35.846003   77859 cri.go:89] found id: ""
	I0729 18:31:35.846018   77859 logs.go:276] 1 containers: [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4]
	I0729 18:31:35.846072   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:35.851227   77859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:35.851302   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:35.892117   77859 cri.go:89] found id: "fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:35.892142   77859 cri.go:89] found id: ""
	I0729 18:31:35.892158   77859 logs.go:276] 1 containers: [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a]
	I0729 18:31:35.892215   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:35.897136   77859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:35.897216   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:35.941512   77859 cri.go:89] found id: "2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:35.941532   77859 cri.go:89] found id: ""
	I0729 18:31:35.941541   77859 logs.go:276] 1 containers: [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b]
	I0729 18:31:35.941598   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:35.946072   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:35.946124   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:35.984306   77859 cri.go:89] found id: "991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:35.984327   77859 cri.go:89] found id: ""
	I0729 18:31:35.984335   77859 logs.go:276] 1 containers: [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd]
	I0729 18:31:35.984381   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:35.988605   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:35.988671   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:36.031476   77859 cri.go:89] found id: "ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:36.031504   77859 cri.go:89] found id: ""
	I0729 18:31:36.031514   77859 logs.go:276] 1 containers: [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9]
	I0729 18:31:36.031567   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:36.037262   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:36.037319   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:36.078054   77859 cri.go:89] found id: "92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:36.078076   77859 cri.go:89] found id: ""
	I0729 18:31:36.078084   77859 logs.go:276] 1 containers: [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc]
	I0729 18:31:36.078134   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:36.082628   77859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:36.082693   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:36.122768   77859 cri.go:89] found id: ""
	I0729 18:31:36.122791   77859 logs.go:276] 0 containers: []
	W0729 18:31:36.122799   77859 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:36.122804   77859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 18:31:36.122849   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 18:31:36.166611   77859 cri.go:89] found id: "9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:36.166636   77859 cri.go:89] found id: "482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:36.166642   77859 cri.go:89] found id: ""
	I0729 18:31:36.166650   77859 logs.go:276] 2 containers: [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b]
	I0729 18:31:36.166712   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:36.171240   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:36.175336   77859 logs.go:123] Gathering logs for kube-controller-manager [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc] ...
	I0729 18:31:36.175354   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:36.233224   77859 logs.go:123] Gathering logs for storage-provisioner [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481] ...
	I0729 18:31:36.233255   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:36.282788   77859 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:36.282820   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:36.675615   77859 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:36.675660   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:36.731559   77859 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:36.731602   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:36.747814   77859 logs.go:123] Gathering logs for kube-scheduler [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd] ...
	I0729 18:31:36.747845   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:36.786940   77859 logs.go:123] Gathering logs for kube-proxy [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9] ...
	I0729 18:31:36.787036   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:36.829659   77859 logs.go:123] Gathering logs for storage-provisioner [482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b] ...
	I0729 18:31:36.829694   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:36.865907   77859 logs.go:123] Gathering logs for container status ...
	I0729 18:31:36.865939   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:36.908399   77859 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:36.908427   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 18:31:37.012220   77859 logs.go:123] Gathering logs for kube-apiserver [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4] ...
	I0729 18:31:37.012255   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:37.063429   77859 logs.go:123] Gathering logs for etcd [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a] ...
	I0729 18:31:37.063463   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:37.107615   77859 logs.go:123] Gathering logs for coredns [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b] ...
	I0729 18:31:37.107654   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:39.655973   77859 system_pods.go:59] 8 kube-system pods found
	I0729 18:31:39.656011   77859 system_pods.go:61] "coredns-7db6d8ff4d-mk6mx" [e005b1f9-cc7a-45aa-915e-85a461ebc814] Running
	I0729 18:31:39.656019   77859 system_pods.go:61] "etcd-default-k8s-diff-port-502055" [72b552cc-67b0-46bf-b3dd-b6732ebe8493] Running
	I0729 18:31:39.656025   77859 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-502055" [0dc22dbc-667e-4d6f-9938-b13bf3503f79] Running
	I0729 18:31:39.656032   77859 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-502055" [4df00b98-12cf-4359-9d98-8cce6ee9708a] Running
	I0729 18:31:39.656037   77859 system_pods.go:61] "kube-proxy-cgdm8" [57a99bb3-9e63-47dd-a958-5be7f3c0a9c0] Running
	I0729 18:31:39.656043   77859 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-502055" [247b7cd1-6267-469d-af05-b33b284ae846] Running
	I0729 18:31:39.656051   77859 system_pods.go:61] "metrics-server-569cc877fc-bm8tm" [6891d9ee-82db-4307-adf1-ff60d35506bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:31:39.656057   77859 system_pods.go:61] "storage-provisioner" [c2264d30-60dc-41f9-9b84-3b073031cf1b] Running
	I0729 18:31:39.656068   77859 system_pods.go:74] duration metric: took 3.851988452s to wait for pod list to return data ...
	I0729 18:31:39.656081   77859 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:31:39.658999   77859 default_sa.go:45] found service account: "default"
	I0729 18:31:39.659024   77859 default_sa.go:55] duration metric: took 2.935237ms for default service account to be created ...
	I0729 18:31:39.659034   77859 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 18:31:39.664926   77859 system_pods.go:86] 8 kube-system pods found
	I0729 18:31:39.664952   77859 system_pods.go:89] "coredns-7db6d8ff4d-mk6mx" [e005b1f9-cc7a-45aa-915e-85a461ebc814] Running
	I0729 18:31:39.664959   77859 system_pods.go:89] "etcd-default-k8s-diff-port-502055" [72b552cc-67b0-46bf-b3dd-b6732ebe8493] Running
	I0729 18:31:39.664966   77859 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-502055" [0dc22dbc-667e-4d6f-9938-b13bf3503f79] Running
	I0729 18:31:39.664973   77859 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-502055" [4df00b98-12cf-4359-9d98-8cce6ee9708a] Running
	I0729 18:31:39.664979   77859 system_pods.go:89] "kube-proxy-cgdm8" [57a99bb3-9e63-47dd-a958-5be7f3c0a9c0] Running
	I0729 18:31:39.664987   77859 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-502055" [247b7cd1-6267-469d-af05-b33b284ae846] Running
	I0729 18:31:39.665003   77859 system_pods.go:89] "metrics-server-569cc877fc-bm8tm" [6891d9ee-82db-4307-adf1-ff60d35506bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:31:39.665013   77859 system_pods.go:89] "storage-provisioner" [c2264d30-60dc-41f9-9b84-3b073031cf1b] Running
	I0729 18:31:39.665025   77859 system_pods.go:126] duration metric: took 5.974722ms to wait for k8s-apps to be running ...
	I0729 18:31:39.665036   77859 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 18:31:39.665093   77859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:31:39.685280   77859 system_svc.go:56] duration metric: took 20.237099ms WaitForService to wait for kubelet
	I0729 18:31:39.685311   77859 kubeadm.go:582] duration metric: took 4m24.205126513s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:31:39.685336   77859 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:31:39.688419   77859 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:31:39.688441   77859 node_conditions.go:123] node cpu capacity is 2
	I0729 18:31:39.688455   77859 node_conditions.go:105] duration metric: took 3.111768ms to run NodePressure ...
	I0729 18:31:39.688470   77859 start.go:241] waiting for startup goroutines ...
	I0729 18:31:39.688483   77859 start.go:246] waiting for cluster config update ...
	I0729 18:31:39.688497   77859 start.go:255] writing updated cluster config ...
	I0729 18:31:39.688830   77859 ssh_runner.go:195] Run: rm -f paused
	I0729 18:31:39.739685   77859 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 18:31:39.741763   77859 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-502055" cluster and "default" namespace by default
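
	The 77859 run ends with the cluster reported healthy on https://192.168.61.244:8444 and every kube-system pod Running except metrics-server. The same two checks can be repeated by hand against the kubeconfig minikube just wrote (illustrative; the context name is taken from the log):

	    # Illustration: manual versions of the final health and pod checks above.
	    kubectl --context default-k8s-diff-port-502055 get --raw /healthz       # expect "ok"
	    kubectl --context default-k8s-diff-port-502055 -n kube-system get pods
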
	I0729 18:31:37.226046   77627 out.go:204]   - Booting up control plane ...
	I0729 18:31:37.226163   77627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:31:37.227852   77627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:31:37.228710   77627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:31:37.248177   77627 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:31:37.248863   77627 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:31:37.248915   77627 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:31:37.376905   77627 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 18:31:37.377030   77627 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 18:31:37.878928   77627 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.066447ms
	I0729 18:31:37.879057   77627 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 18:31:38.935622   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:41.433736   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:42.880479   77627 kubeadm.go:310] [api-check] The API server is healthy after 5.001345894s
	I0729 18:31:42.892513   77627 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 18:31:42.910175   77627 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 18:31:42.948111   77627 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 18:31:42.948340   77627 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-409322 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 18:31:42.966823   77627 kubeadm.go:310] [bootstrap-token] Using token: f8a98i.3r2is78gllm02lfe
	I0729 18:31:42.968170   77627 out.go:204]   - Configuring RBAC rules ...
	I0729 18:31:42.968304   77627 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 18:31:42.978257   77627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 18:31:42.986458   77627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 18:31:42.989744   77627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 18:31:42.992484   77627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 18:31:42.995162   77627 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 18:31:43.287739   77627 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 18:31:43.726370   77627 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 18:31:44.290225   77627 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 18:31:44.291166   77627 kubeadm.go:310] 
	I0729 18:31:44.291267   77627 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 18:31:44.291278   77627 kubeadm.go:310] 
	I0729 18:31:44.291392   77627 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 18:31:44.291401   77627 kubeadm.go:310] 
	I0729 18:31:44.291436   77627 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 18:31:44.291530   77627 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 18:31:44.291589   77627 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 18:31:44.291606   77627 kubeadm.go:310] 
	I0729 18:31:44.291701   77627 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 18:31:44.291713   77627 kubeadm.go:310] 
	I0729 18:31:44.291788   77627 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 18:31:44.291797   77627 kubeadm.go:310] 
	I0729 18:31:44.291860   77627 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 18:31:44.291954   77627 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 18:31:44.292052   77627 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 18:31:44.292070   77627 kubeadm.go:310] 
	I0729 18:31:44.292167   77627 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 18:31:44.292269   77627 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 18:31:44.292280   77627 kubeadm.go:310] 
	I0729 18:31:44.292402   77627 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token f8a98i.3r2is78gllm02lfe \
	I0729 18:31:44.292543   77627 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 \
	I0729 18:31:44.292585   77627 kubeadm.go:310] 	--control-plane 
	I0729 18:31:44.292595   77627 kubeadm.go:310] 
	I0729 18:31:44.292710   77627 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 18:31:44.292732   77627 kubeadm.go:310] 
	I0729 18:31:44.292836   77627 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token f8a98i.3r2is78gllm02lfe \
	I0729 18:31:44.293015   77627 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 
	I0729 18:31:44.293440   77627 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:31:44.293500   77627 cni.go:84] Creating CNI manager for ""
	I0729 18:31:44.293512   77627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:31:44.295432   77627 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:31:44.296845   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:31:44.308178   77627 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
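The "scp memory" step above writes a generated bridge conflist to /etc/cni/net.d/1-k8s.conflist. As an illustration only (the exact 496-byte payload minikube generates is not shown in the log), a sketch of that step with a representative bridge CNI config embedded as a string:

package main

import "os"

// Illustrative bridge CNI conflist; the real file minikube writes may differ.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {"type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
     "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}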
	I0729 18:31:44.334403   77627 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 18:31:44.334542   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:44.334562   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-409322 minikube.k8s.io/updated_at=2024_07_29T18_31_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899 minikube.k8s.io/name=embed-certs-409322 minikube.k8s.io/primary=true
	I0729 18:31:44.366345   77627 ops.go:34] apiserver oom_adj: -16
	I0729 18:31:44.537970   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:43.433884   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:45.434714   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:45.039020   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:45.538831   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:46.038700   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:46.538761   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:47.038725   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:47.538100   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:48.038309   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:48.538896   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:49.039011   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:49.538333   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:47.435067   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:49.934658   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:50.038548   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:50.538590   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:51.038131   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:51.538253   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:52.038599   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:52.538827   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:53.038077   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:53.538860   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:54.038530   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:54.538952   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:52.433783   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:54.434442   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:56.434864   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:55.038263   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:55.538050   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:56.038006   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:56.538079   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:57.038042   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:57.538146   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:57.696274   77627 kubeadm.go:1113] duration metric: took 13.36179604s to wait for elevateKubeSystemPrivileges
	I0729 18:31:57.696308   77627 kubeadm.go:394] duration metric: took 5m12.066483926s to StartCluster
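The repeated "kubectl get sa default" runs above are a poll: minikube keeps re-running the command until the default service account exists (the elevateKubeSystemPrivileges wait), which took about 13s here. A hedged sketch of that pattern; the 500ms interval and 2-minute deadline are assumptions, not the values minikube actually uses:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA re-runs "kubectl get sa default" until it succeeds or ctx expires.
func waitForDefaultSA(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		cmd := exec.CommandContext(ctx, "sudo",
			"/var/lib/minikube/binaries/v1.30.3/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForDefaultSA(ctx); err != nil {
		panic(err)
	}
	fmt.Println("default service account is available")
}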
	I0729 18:31:57.696324   77627 settings.go:142] acquiring lock: {Name:mkd2c4591636cc1d19b23a0dab1807db2e7ea395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:31:57.696406   77627 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:31:57.698195   77627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:31:57.698479   77627 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:31:57.698592   77627 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 18:31:57.698674   77627 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-409322"
	I0729 18:31:57.698688   77627 addons.go:69] Setting metrics-server=true in profile "embed-certs-409322"
	I0729 18:31:57.698695   77627 addons.go:69] Setting default-storageclass=true in profile "embed-certs-409322"
	I0729 18:31:57.698714   77627 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-409322"
	I0729 18:31:57.698719   77627 addons.go:234] Setting addon metrics-server=true in "embed-certs-409322"
	W0729 18:31:57.698723   77627 addons.go:243] addon storage-provisioner should already be in state true
	W0729 18:31:57.698729   77627 addons.go:243] addon metrics-server should already be in state true
	I0729 18:31:57.698733   77627 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-409322"
	I0729 18:31:57.698755   77627 host.go:66] Checking if "embed-certs-409322" exists ...
	I0729 18:31:57.698676   77627 config.go:182] Loaded profile config "embed-certs-409322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:31:57.698760   77627 host.go:66] Checking if "embed-certs-409322" exists ...
	I0729 18:31:57.699157   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.699169   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.699207   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.699170   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.699229   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.699209   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.700201   77627 out.go:177] * Verifying Kubernetes components...
	I0729 18:31:57.701577   77627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:31:57.715130   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44873
	I0729 18:31:57.715156   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34459
	I0729 18:31:57.715708   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.715759   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.716320   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.716329   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.716344   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.716345   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.716666   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.716672   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.716868   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:31:57.717251   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.717283   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.717715   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41041
	I0729 18:31:57.718172   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.718684   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.718709   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.719111   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.719630   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.719670   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.720815   77627 addons.go:234] Setting addon default-storageclass=true in "embed-certs-409322"
	W0729 18:31:57.720839   77627 addons.go:243] addon default-storageclass should already be in state true
	I0729 18:31:57.720870   77627 host.go:66] Checking if "embed-certs-409322" exists ...
	I0729 18:31:57.721233   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.721264   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.733757   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0729 18:31:57.734325   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.735372   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.735397   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.735736   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.735928   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:31:57.735939   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35853
	I0729 18:31:57.736244   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.736923   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.736942   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.737318   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.737664   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:31:57.739761   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:31:57.740354   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:31:57.741103   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43867
	I0729 18:31:57.741489   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.741979   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.741999   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.742296   77627 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 18:31:57.742348   77627 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:31:57.742400   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.743411   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.743443   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.743498   77627 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 18:31:57.743515   77627 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 18:31:57.743537   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:31:57.743682   77627 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:31:57.743697   77627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 18:31:57.743711   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:31:57.748331   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.748743   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:31:57.748759   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.748941   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:31:57.748986   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.749110   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:31:57.749290   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:31:57.749423   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:31:57.749638   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:31:57.749650   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.749671   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:31:57.749834   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:31:57.749940   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:31:57.750051   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:31:57.760794   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33699
	I0729 18:31:57.761136   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.761574   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.761585   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.761954   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.762133   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:31:57.764344   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:31:57.764532   77627 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 18:31:57.764541   77627 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 18:31:57.764555   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:31:57.767111   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.767485   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:31:57.767498   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.767625   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:31:57.767763   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:31:57.767875   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:31:57.768004   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:31:57.965911   77627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:31:57.986557   77627 node_ready.go:35] waiting up to 6m0s for node "embed-certs-409322" to be "Ready" ...
	I0729 18:31:57.995790   77627 node_ready.go:49] node "embed-certs-409322" has status "Ready":"True"
	I0729 18:31:57.995809   77627 node_ready.go:38] duration metric: took 9.222398ms for node "embed-certs-409322" to be "Ready" ...
	I0729 18:31:57.995817   77627 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:31:58.003516   77627 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wpnfg" in "kube-system" namespace to be "Ready" ...
	I0729 18:31:58.047522   77627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:31:58.053274   77627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 18:31:58.053290   77627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 18:31:58.074101   77627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 18:31:58.074127   77627 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 18:31:58.088159   77627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 18:31:58.097491   77627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:31:58.097518   77627 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 18:31:58.125335   77627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:31:58.628396   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.628425   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.628466   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.628480   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.628847   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.628909   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Closing plugin on server side
	I0729 18:31:58.628918   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.628936   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.628946   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.628955   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.628914   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.628898   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Closing plugin on server side
	I0729 18:31:58.629017   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.629046   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.629268   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.629281   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.630616   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Closing plugin on server side
	I0729 18:31:58.630636   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.630649   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.660029   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.660061   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.660339   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.660358   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.975389   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.975414   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.975721   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.975740   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.975750   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.975760   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.976034   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.976051   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.976063   77627 addons.go:475] Verifying addon metrics-server=true in "embed-certs-409322"
	I0729 18:31:58.978172   77627 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 18:31:58.979568   77627 addons.go:510] duration metric: took 1.280977366s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 18:31:58.935700   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:00.935984   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:00.009825   77627 pod_ready.go:92] pod "coredns-7db6d8ff4d-wpnfg" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:00.009846   77627 pod_ready.go:81] duration metric: took 2.006300447s for pod "coredns-7db6d8ff4d-wpnfg" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:00.009855   77627 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:02.016463   77627 pod_ready.go:102] pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:04.515885   77627 pod_ready.go:102] pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:03.432654   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:05.434708   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:06.517308   77627 pod_ready.go:102] pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:09.016256   77627 pod_ready.go:92] pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.016276   77627 pod_ready.go:81] duration metric: took 9.006414116s for pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.016287   77627 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.021639   77627 pod_ready.go:92] pod "etcd-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.021661   77627 pod_ready.go:81] duration metric: took 5.365088ms for pod "etcd-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.021672   77627 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.026599   77627 pod_ready.go:92] pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.026618   77627 pod_ready.go:81] duration metric: took 4.939458ms for pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.026629   77627 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.031994   77627 pod_ready.go:92] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.032009   77627 pod_ready.go:81] duration metric: took 5.37307ms for pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.032020   77627 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kxf5z" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.036180   77627 pod_ready.go:92] pod "kube-proxy-kxf5z" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.036196   77627 pod_ready.go:81] duration metric: took 4.16934ms for pod "kube-proxy-kxf5z" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.036205   77627 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.414950   77627 pod_ready.go:92] pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.414973   77627 pod_ready.go:81] duration metric: took 378.76116ms for pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.414981   77627 pod_ready.go:38] duration metric: took 11.419116871s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
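The pod_ready waits above check the Ready condition of each system-critical pod. A minimal client-go sketch of the same check for the kube-dns pods, assuming the kubeconfig path used in this run; it is an illustration of the condition being polled, not minikube's actual implementation:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19345-11206/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		ready := false
		for _, c := range pod.Status.Conditions {
			// The "Ready" pod condition is what the pod_ready log lines report.
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s Ready=%v\n", pod.Name, ready)
	}
}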
	I0729 18:32:09.414995   77627 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:32:09.415042   77627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:32:09.434210   77627 api_server.go:72] duration metric: took 11.735691998s to wait for apiserver process to appear ...
	I0729 18:32:09.434240   77627 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:32:09.434260   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:32:09.439755   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I0729 18:32:09.440612   77627 api_server.go:141] control plane version: v1.30.3
	I0729 18:32:09.440631   77627 api_server.go:131] duration metric: took 6.382802ms to wait for apiserver health ...
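The healthz probe above is a plain GET against the apiserver; a 200 response with body "ok" is treated as healthy. A minimal sketch of the same check; TLS verification and client certificates are skipped here for brevity, whereas minikube's real check authenticates with the cluster's certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// Skip verification only for this illustration.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.39.58:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
}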
	I0729 18:32:09.440640   77627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:32:09.617533   77627 system_pods.go:59] 9 kube-system pods found
	I0729 18:32:09.617564   77627 system_pods.go:61] "coredns-7db6d8ff4d-wpnfg" [687cbc8f-370a-4b72-bc1c-6ae36efe890e] Running
	I0729 18:32:09.617569   77627 system_pods.go:61] "coredns-7db6d8ff4d-wztpj" [1f1a01e7-9cec-4ba8-a340-8f9ccdd728d7] Running
	I0729 18:32:09.617572   77627 system_pods.go:61] "etcd-embed-certs-409322" [68de54c3-7d47-4e79-a064-08b013b1d910] Running
	I0729 18:32:09.617575   77627 system_pods.go:61] "kube-apiserver-embed-certs-409322" [dc1a0568-ef7c-493f-91fb-7438456daf6d] Running
	I0729 18:32:09.617579   77627 system_pods.go:61] "kube-controller-manager-embed-certs-409322" [da715e8c-2437-487b-b4e0-c93af2f079f7] Running
	I0729 18:32:09.617582   77627 system_pods.go:61] "kube-proxy-kxf5z" [74ed1812-b3bf-429d-b8f1-bdccb3415fb5] Running
	I0729 18:32:09.617584   77627 system_pods.go:61] "kube-scheduler-embed-certs-409322" [188cf21a-9a8a-45de-9a91-9e593626ce6d] Running
	I0729 18:32:09.617591   77627 system_pods.go:61] "metrics-server-569cc877fc-6q4nl" [57dc61cc-7490-49e5-9d03-c81aa5d25aea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:32:09.617596   77627 system_pods.go:61] "storage-provisioner" [b0b1e31d-9b5c-4e82-aea7-56184832c053] Running
	I0729 18:32:09.617604   77627 system_pods.go:74] duration metric: took 176.958452ms to wait for pod list to return data ...
	I0729 18:32:09.617614   77627 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:32:09.813846   77627 default_sa.go:45] found service account: "default"
	I0729 18:32:09.813871   77627 default_sa.go:55] duration metric: took 196.249412ms for default service account to be created ...
	I0729 18:32:09.813886   77627 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 18:32:10.019167   77627 system_pods.go:86] 9 kube-system pods found
	I0729 18:32:10.019199   77627 system_pods.go:89] "coredns-7db6d8ff4d-wpnfg" [687cbc8f-370a-4b72-bc1c-6ae36efe890e] Running
	I0729 18:32:10.019208   77627 system_pods.go:89] "coredns-7db6d8ff4d-wztpj" [1f1a01e7-9cec-4ba8-a340-8f9ccdd728d7] Running
	I0729 18:32:10.019214   77627 system_pods.go:89] "etcd-embed-certs-409322" [68de54c3-7d47-4e79-a064-08b013b1d910] Running
	I0729 18:32:10.019220   77627 system_pods.go:89] "kube-apiserver-embed-certs-409322" [dc1a0568-ef7c-493f-91fb-7438456daf6d] Running
	I0729 18:32:10.019227   77627 system_pods.go:89] "kube-controller-manager-embed-certs-409322" [da715e8c-2437-487b-b4e0-c93af2f079f7] Running
	I0729 18:32:10.019233   77627 system_pods.go:89] "kube-proxy-kxf5z" [74ed1812-b3bf-429d-b8f1-bdccb3415fb5] Running
	I0729 18:32:10.019239   77627 system_pods.go:89] "kube-scheduler-embed-certs-409322" [188cf21a-9a8a-45de-9a91-9e593626ce6d] Running
	I0729 18:32:10.019249   77627 system_pods.go:89] "metrics-server-569cc877fc-6q4nl" [57dc61cc-7490-49e5-9d03-c81aa5d25aea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:32:10.019257   77627 system_pods.go:89] "storage-provisioner" [b0b1e31d-9b5c-4e82-aea7-56184832c053] Running
	I0729 18:32:10.019267   77627 system_pods.go:126] duration metric: took 205.375742ms to wait for k8s-apps to be running ...
	I0729 18:32:10.019278   77627 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 18:32:10.019326   77627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:32:10.034632   77627 system_svc.go:56] duration metric: took 15.345747ms WaitForService to wait for kubelet
	I0729 18:32:10.034659   77627 kubeadm.go:582] duration metric: took 12.336145267s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:32:10.034687   77627 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:32:10.214205   77627 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:32:10.214240   77627 node_conditions.go:123] node cpu capacity is 2
	I0729 18:32:10.214255   77627 node_conditions.go:105] duration metric: took 179.559492ms to run NodePressure ...
	I0729 18:32:10.214269   77627 start.go:241] waiting for startup goroutines ...
	I0729 18:32:10.214279   77627 start.go:246] waiting for cluster config update ...
	I0729 18:32:10.214297   77627 start.go:255] writing updated cluster config ...
	I0729 18:32:10.214639   77627 ssh_runner.go:195] Run: rm -f paused
	I0729 18:32:10.264858   77627 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 18:32:10.266718   77627 out.go:177] * Done! kubectl is now configured to use "embed-certs-409322" cluster and "default" namespace by default
	I0729 18:32:07.934519   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:10.434593   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:13.262907   78080 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 18:32:13.263487   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:13.263679   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
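The endpoint kubeadm is retrying here is the kubelet's own healthz listener on localhost:10248; "connection refused" means nothing is listening there yet, i.e. the kubelet has not come up on that node. A one-off check of the same endpoint might look like the sketch below (illustrative only):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Same URL kubeadm's kubelet-check polls, per the log line above.
	resp, err := http.Get("http://localhost:10248/healthz")
	if err != nil {
		fmt.Println("kubelet not healthy yet:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("kubelet healthz status:", resp.Status)
}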
	I0729 18:32:12.934686   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:13.928481   77394 pod_ready.go:81] duration metric: took 4m0.00080059s for pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace to be "Ready" ...
	E0729 18:32:13.928509   77394 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 18:32:13.928528   77394 pod_ready.go:38] duration metric: took 4m10.042077465s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:32:13.928554   77394 kubeadm.go:597] duration metric: took 4m18.205651497s to restartPrimaryControlPlane
	W0729 18:32:13.928623   77394 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 18:32:13.928649   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:32:18.264261   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:18.264554   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:32:28.265190   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:28.265433   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:32:40.226240   77394 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.297571665s)
	I0729 18:32:40.226316   77394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:32:40.243407   77394 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:32:40.254946   77394 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:32:40.264608   77394 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:32:40.264631   77394 kubeadm.go:157] found existing configuration files:
	
	I0729 18:32:40.264675   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:32:40.274180   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:32:40.274231   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:32:40.283752   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:32:40.293163   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:32:40.293232   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:32:40.302533   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:32:40.311972   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:32:40.312024   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:32:40.321513   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:32:40.330546   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:32:40.330599   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
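The sequence above is stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails (here the files simply do not exist), so the subsequent kubeadm init can regenerate them. A hedged Go sketch of that check-and-remove pattern:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// cleanupIfStale removes path if it exists but does not mention endpoint.
func cleanupIfStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil // nothing to clean up
	}
	if err != nil {
		return err
	}
	if !bytes.Contains(data, []byte(endpoint)) {
		fmt.Printf("removing stale %s\n", path)
		return os.Remove(path)
	}
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, p := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := cleanupIfStale(p, endpoint); err != nil {
			panic(err)
		}
	}
}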
	I0729 18:32:40.340190   77394 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:32:40.389517   77394 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 18:32:40.389592   77394 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:32:40.508682   77394 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:32:40.508783   77394 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:32:40.508859   77394 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 18:32:40.517673   77394 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:32:40.520623   77394 out.go:204]   - Generating certificates and keys ...
	I0729 18:32:40.520726   77394 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:32:40.520824   77394 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:32:40.520893   77394 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:32:40.520961   77394 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:32:40.521045   77394 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:32:40.521094   77394 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:32:40.521171   77394 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:32:40.521254   77394 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:32:40.521357   77394 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:32:40.521475   77394 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:32:40.521535   77394 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:32:40.521606   77394 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:32:40.615870   77394 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:32:40.837902   77394 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 18:32:40.924418   77394 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:32:41.068573   77394 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:32:41.287201   77394 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:32:41.287991   77394 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:32:41.293523   77394 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:32:41.295211   77394 out.go:204]   - Booting up control plane ...
	I0729 18:32:41.295329   77394 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:32:41.295455   77394 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:32:41.295560   77394 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:32:41.317802   77394 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:32:41.324522   77394 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:32:41.324589   77394 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:32:41.463007   77394 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 18:32:41.463116   77394 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 18:32:41.982144   77394 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 519.208408ms
	I0729 18:32:41.982263   77394 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 18:32:46.983564   77394 kubeadm.go:310] [api-check] The API server is healthy after 5.001335599s
	I0729 18:32:46.999811   77394 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 18:32:47.018194   77394 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 18:32:47.051359   77394 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 18:32:47.051564   77394 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-888056 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 18:32:47.062615   77394 kubeadm.go:310] [bootstrap-token] Using token: a14u5x.5d4oe8yqdl9tiifc
	I0729 18:32:47.064051   77394 out.go:204]   - Configuring RBAC rules ...
	I0729 18:32:47.064187   77394 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 18:32:47.071856   77394 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 18:32:47.084985   77394 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 18:32:47.088622   77394 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 18:32:47.091797   77394 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 18:32:47.096194   77394 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 18:32:47.391394   77394 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 18:32:47.834314   77394 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 18:32:48.394665   77394 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 18:32:48.394689   77394 kubeadm.go:310] 
	I0729 18:32:48.394763   77394 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 18:32:48.394797   77394 kubeadm.go:310] 
	I0729 18:32:48.394928   77394 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 18:32:48.394941   77394 kubeadm.go:310] 
	I0729 18:32:48.394979   77394 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 18:32:48.395058   77394 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 18:32:48.395126   77394 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 18:32:48.395141   77394 kubeadm.go:310] 
	I0729 18:32:48.395221   77394 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 18:32:48.395230   77394 kubeadm.go:310] 
	I0729 18:32:48.395297   77394 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 18:32:48.395306   77394 kubeadm.go:310] 
	I0729 18:32:48.395374   77394 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 18:32:48.395467   77394 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 18:32:48.395554   77394 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 18:32:48.395563   77394 kubeadm.go:310] 
	I0729 18:32:48.395652   77394 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 18:32:48.395766   77394 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 18:32:48.395778   77394 kubeadm.go:310] 
	I0729 18:32:48.395886   77394 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a14u5x.5d4oe8yqdl9tiifc \
	I0729 18:32:48.396030   77394 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 \
	I0729 18:32:48.396062   77394 kubeadm.go:310] 	--control-plane 
	I0729 18:32:48.396071   77394 kubeadm.go:310] 
	I0729 18:32:48.396191   77394 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 18:32:48.396200   77394 kubeadm.go:310] 
	I0729 18:32:48.396276   77394 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a14u5x.5d4oe8yqdl9tiifc \
	I0729 18:32:48.396393   77394 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 
	I0729 18:32:48.397540   77394 kubeadm.go:310] W0729 18:32:40.358164    2949 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 18:32:48.397921   77394 kubeadm.go:310] W0729 18:32:40.359840    2949 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 18:32:48.398071   77394 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:32:48.398090   77394 cni.go:84] Creating CNI manager for ""
	I0729 18:32:48.398099   77394 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:32:48.399641   77394 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:32:48.266531   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:48.266736   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:32:48.400846   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:32:48.412594   77394 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 18:32:48.434792   77394 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 18:32:48.434872   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:48.434907   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-888056 minikube.k8s.io/updated_at=2024_07_29T18_32_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899 minikube.k8s.io/name=no-preload-888056 minikube.k8s.io/primary=true
	I0729 18:32:48.672892   77394 ops.go:34] apiserver oom_adj: -16
	I0729 18:32:48.673144   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:49.173811   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:49.673775   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:50.173717   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:50.673774   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:51.174068   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:51.673565   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:52.173431   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:52.673602   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:53.173912   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:53.315565   77394 kubeadm.go:1113] duration metric: took 4.880757535s to wait for elevateKubeSystemPrivileges
	I0729 18:32:53.315609   77394 kubeadm.go:394] duration metric: took 4m57.645527986s to StartCluster
	I0729 18:32:53.315633   77394 settings.go:142] acquiring lock: {Name:mkd2c4591636cc1d19b23a0dab1807db2e7ea395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:32:53.315736   77394 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:32:53.317360   77394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:32:53.317579   77394 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.80 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:32:53.317669   77394 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 18:32:53.317784   77394 addons.go:69] Setting storage-provisioner=true in profile "no-preload-888056"
	I0729 18:32:53.317820   77394 addons.go:234] Setting addon storage-provisioner=true in "no-preload-888056"
	I0729 18:32:53.317817   77394 addons.go:69] Setting default-storageclass=true in profile "no-preload-888056"
	W0729 18:32:53.317835   77394 addons.go:243] addon storage-provisioner should already be in state true
	I0729 18:32:53.317840   77394 config.go:182] Loaded profile config "no-preload-888056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:32:53.317836   77394 addons.go:69] Setting metrics-server=true in profile "no-preload-888056"
	I0729 18:32:53.317861   77394 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-888056"
	I0729 18:32:53.317878   77394 host.go:66] Checking if "no-preload-888056" exists ...
	I0729 18:32:53.317882   77394 addons.go:234] Setting addon metrics-server=true in "no-preload-888056"
	W0729 18:32:53.317892   77394 addons.go:243] addon metrics-server should already be in state true
	I0729 18:32:53.317927   77394 host.go:66] Checking if "no-preload-888056" exists ...
	I0729 18:32:53.318302   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.318308   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.318334   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.318345   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.318301   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.318441   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.319022   77394 out.go:177] * Verifying Kubernetes components...
	I0729 18:32:53.320383   77394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:32:53.335666   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38257
	I0729 18:32:53.336170   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.336860   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.336896   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.337301   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.338104   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39753
	I0729 18:32:53.338137   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40655
	I0729 18:32:53.338545   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.338559   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.338595   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.338614   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.339076   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.339094   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.339163   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.339188   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.339510   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.340089   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.340126   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.340346   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.340557   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:32:53.344286   77394 addons.go:234] Setting addon default-storageclass=true in "no-preload-888056"
	W0729 18:32:53.344307   77394 addons.go:243] addon default-storageclass should already be in state true
	I0729 18:32:53.344335   77394 host.go:66] Checking if "no-preload-888056" exists ...
	I0729 18:32:53.344702   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.344727   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.356006   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33765
	I0729 18:32:53.356613   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.357135   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.357159   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.357517   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.357604   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34733
	I0729 18:32:53.357752   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:32:53.358011   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.358472   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.358490   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.358898   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.359110   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:32:53.359546   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:32:53.360493   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:32:53.361662   77394 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:32:53.362464   77394 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 18:32:53.363294   77394 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:32:53.363311   77394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 18:32:53.363331   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:32:53.364170   77394 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 18:32:53.364182   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41425
	I0729 18:32:53.364186   77394 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 18:32:53.364205   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:32:53.364560   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.365040   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.365061   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.365515   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.365963   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.365983   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.367883   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.368768   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.369264   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:32:53.369284   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.369576   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:32:53.369591   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.369858   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:32:53.369964   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:32:53.370009   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:32:53.370102   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:32:53.370169   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:32:53.370198   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:32:53.370317   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:32:53.370344   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:32:53.382571   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37093
	I0729 18:32:53.382940   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.383311   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.383336   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.383748   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.383946   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:32:53.385570   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:32:53.385761   77394 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 18:32:53.385775   77394 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 18:32:53.385792   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:32:53.388411   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.388756   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:32:53.388774   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.389017   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:32:53.389193   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:32:53.389350   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:32:53.389463   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:32:53.585542   77394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:32:53.645556   77394 node_ready.go:35] waiting up to 6m0s for node "no-preload-888056" to be "Ready" ...
	I0729 18:32:53.657965   77394 node_ready.go:49] node "no-preload-888056" has status "Ready":"True"
	I0729 18:32:53.657997   77394 node_ready.go:38] duration metric: took 12.408834ms for node "no-preload-888056" to be "Ready" ...
	I0729 18:32:53.658010   77394 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:32:53.673068   77394 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:53.724224   77394 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 18:32:53.724248   77394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 18:32:53.763536   77394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 18:32:53.774123   77394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:32:53.812615   77394 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 18:32:53.812639   77394 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 18:32:53.945274   77394 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:32:53.945303   77394 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 18:32:54.107180   77394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:32:54.184354   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.184379   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.184699   77394 main.go:141] libmachine: (no-preload-888056) DBG | Closing plugin on server side
	I0729 18:32:54.184748   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.184762   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.184776   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.184786   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.185015   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.185043   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.185077   77394 main.go:141] libmachine: (no-preload-888056) DBG | Closing plugin on server side
	I0729 18:32:54.244759   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.244781   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.245108   77394 main.go:141] libmachine: (no-preload-888056) DBG | Closing plugin on server side
	I0729 18:32:54.245156   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.245169   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.782604   77394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.008443119s)
	I0729 18:32:54.782663   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.782676   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.782990   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.783010   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.783020   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.783028   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.783265   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.783283   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.946051   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.946074   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.946396   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.946418   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.946430   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.946439   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.946680   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.946698   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.946710   77394 addons.go:475] Verifying addon metrics-server=true in "no-preload-888056"
	I0729 18:32:54.948362   77394 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 18:32:54.949821   77394 addons.go:510] duration metric: took 1.632153415s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
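At this point the log reports the metrics-server addon enabled while its pod is still shown Pending in the pod lists further below. A minimal sketch of how the deployment could be checked from the host, assuming the kubectl context minikube created for this profile ("no-preload-888056") and the standard metrics-server Deployment/APIService names; these commands are illustrative and were not part of the test run:

	# switch to the context created for this profile (name taken from the log above)
	kubectl config use-context no-preload-888056
	# wait for the metrics-server deployment to roll out (timeout is an arbitrary choice)
	kubectl -n kube-system rollout status deployment/metrics-server --timeout=120s
	# confirm the aggregated metrics API is registered and reports Available
	kubectl get apiservice v1beta1.metrics.k8s.io
	# once the API is serving, node metrics should be queryable
	kubectl top nodes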
	I0729 18:32:55.679655   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:57.680175   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace has status "Ready":"False"
	I0729 18:33:00.179877   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace has status "Ready":"False"
	I0729 18:33:01.180068   77394 pod_ready.go:92] pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.180094   77394 pod_ready.go:81] duration metric: took 7.506992362s for pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.180106   77394 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-j9ddw" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.185742   77394 pod_ready.go:92] pod "coredns-5cfdc65f69-j9ddw" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.185760   77394 pod_ready.go:81] duration metric: took 5.647157ms for pod "coredns-5cfdc65f69-j9ddw" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.185769   77394 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.190056   77394 pod_ready.go:92] pod "etcd-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.190077   77394 pod_ready.go:81] duration metric: took 4.30181ms for pod "etcd-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.190085   77394 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.194255   77394 pod_ready.go:92] pod "kube-apiserver-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.194273   77394 pod_ready.go:81] duration metric: took 4.182006ms for pod "kube-apiserver-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.194284   77394 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.199056   77394 pod_ready.go:92] pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.199072   77394 pod_ready.go:81] duration metric: took 4.779158ms for pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.199081   77394 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-94ff9" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.578279   77394 pod_ready.go:92] pod "kube-proxy-94ff9" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.578299   77394 pod_ready.go:81] duration metric: took 379.211109ms for pod "kube-proxy-94ff9" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.578308   77394 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:02.378184   77394 pod_ready.go:92] pod "kube-scheduler-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:02.378205   77394 pod_ready.go:81] duration metric: took 799.890202ms for pod "kube-scheduler-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:02.378212   77394 pod_ready.go:38] duration metric: took 8.720189182s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:33:02.378226   77394 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:33:02.378282   77394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:33:02.396023   77394 api_server.go:72] duration metric: took 9.07841179s to wait for apiserver process to appear ...
	I0729 18:33:02.396050   77394 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:33:02.396070   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:33:02.403736   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 200:
	ok
	I0729 18:33:02.404828   77394 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 18:33:02.404850   77394 api_server.go:131] duration metric: took 8.793481ms to wait for apiserver health ...
	I0729 18:33:02.404858   77394 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:33:02.580656   77394 system_pods.go:59] 9 kube-system pods found
	I0729 18:33:02.580683   77394 system_pods.go:61] "coredns-5cfdc65f69-bbh6c" [66b43af3-78eb-437f-81d7-eedb4cc34349] Running
	I0729 18:33:02.580687   77394 system_pods.go:61] "coredns-5cfdc65f69-j9ddw" [679f8750-86aa-4e00-8291-6996b54b1930] Running
	I0729 18:33:02.580691   77394 system_pods.go:61] "etcd-no-preload-888056" [abcd648d-659a-4f02-a769-f2222eaac945] Running
	I0729 18:33:02.580695   77394 system_pods.go:61] "kube-apiserver-no-preload-888056" [99a48803-06b1-44a6-a0cc-f28f2ba7235f] Running
	I0729 18:33:02.580699   77394 system_pods.go:61] "kube-controller-manager-no-preload-888056" [6bb3d64c-9fef-41ee-a68d-170fac01dec5] Running
	I0729 18:33:02.580702   77394 system_pods.go:61] "kube-proxy-94ff9" [dd06899e-3d54-4b71-bda6-f8c6d06ce100] Running
	I0729 18:33:02.580704   77394 system_pods.go:61] "kube-scheduler-no-preload-888056" [a1b60226-df5e-45ce-8382-a8d277278129] Running
	I0729 18:33:02.580710   77394 system_pods.go:61] "metrics-server-78fcd8795b-9qqmj" [45bbbaf3-cf3e-4db1-9eec-693425bc5dff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:33:02.580714   77394 system_pods.go:61] "storage-provisioner" [0aacb67c-abea-47fb-a2f1-f1245e68599a] Running
	I0729 18:33:02.580721   77394 system_pods.go:74] duration metric: took 175.857868ms to wait for pod list to return data ...
	I0729 18:33:02.580728   77394 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:33:02.778962   77394 default_sa.go:45] found service account: "default"
	I0729 18:33:02.778987   77394 default_sa.go:55] duration metric: took 198.250326ms for default service account to be created ...
	I0729 18:33:02.778995   77394 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 18:33:02.981123   77394 system_pods.go:86] 9 kube-system pods found
	I0729 18:33:02.981159   77394 system_pods.go:89] "coredns-5cfdc65f69-bbh6c" [66b43af3-78eb-437f-81d7-eedb4cc34349] Running
	I0729 18:33:02.981166   77394 system_pods.go:89] "coredns-5cfdc65f69-j9ddw" [679f8750-86aa-4e00-8291-6996b54b1930] Running
	I0729 18:33:02.981175   77394 system_pods.go:89] "etcd-no-preload-888056" [abcd648d-659a-4f02-a769-f2222eaac945] Running
	I0729 18:33:02.981181   77394 system_pods.go:89] "kube-apiserver-no-preload-888056" [99a48803-06b1-44a6-a0cc-f28f2ba7235f] Running
	I0729 18:33:02.981186   77394 system_pods.go:89] "kube-controller-manager-no-preload-888056" [6bb3d64c-9fef-41ee-a68d-170fac01dec5] Running
	I0729 18:33:02.981190   77394 system_pods.go:89] "kube-proxy-94ff9" [dd06899e-3d54-4b71-bda6-f8c6d06ce100] Running
	I0729 18:33:02.981196   77394 system_pods.go:89] "kube-scheduler-no-preload-888056" [a1b60226-df5e-45ce-8382-a8d277278129] Running
	I0729 18:33:02.981206   77394 system_pods.go:89] "metrics-server-78fcd8795b-9qqmj" [45bbbaf3-cf3e-4db1-9eec-693425bc5dff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:33:02.981214   77394 system_pods.go:89] "storage-provisioner" [0aacb67c-abea-47fb-a2f1-f1245e68599a] Running
	I0729 18:33:02.981228   77394 system_pods.go:126] duration metric: took 202.226569ms to wait for k8s-apps to be running ...
	I0729 18:33:02.981239   77394 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 18:33:02.981290   77394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:33:02.999134   77394 system_svc.go:56] duration metric: took 17.878004ms WaitForService to wait for kubelet
	I0729 18:33:02.999169   77394 kubeadm.go:582] duration metric: took 9.681562891s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:33:02.999187   77394 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:33:03.179246   77394 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:33:03.179274   77394 node_conditions.go:123] node cpu capacity is 2
	I0729 18:33:03.179286   77394 node_conditions.go:105] duration metric: took 180.093491ms to run NodePressure ...
	I0729 18:33:03.179312   77394 start.go:241] waiting for startup goroutines ...
	I0729 18:33:03.179322   77394 start.go:246] waiting for cluster config update ...
	I0729 18:33:03.179344   77394 start.go:255] writing updated cluster config ...
	I0729 18:33:03.179658   77394 ssh_runner.go:195] Run: rm -f paused
	I0729 18:33:03.228664   77394 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 18:33:03.230706   77394 out.go:177] * Done! kubectl is now configured to use "no-preload-888056" cluster and "default" namespace by default
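The two lines above note a one-minor-version skew between the host kubectl (1.30.3) and the cluster (1.31.0-beta.0), which kubectl tolerates. A quick way to reproduce that comparison against the same context; a sketch only, not output from this run:

	# print client and server versions to confirm the skew minikube reported
	kubectl --context no-preload-888056 version --output=yaml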
	I0729 18:33:28.269122   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:33:28.269375   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:33:28.269399   78080 kubeadm.go:310] 
	I0729 18:33:28.269433   78080 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 18:33:28.269471   78080 kubeadm.go:310] 		timed out waiting for the condition
	I0729 18:33:28.269480   78080 kubeadm.go:310] 
	I0729 18:33:28.269508   78080 kubeadm.go:310] 	This error is likely caused by:
	I0729 18:33:28.269541   78080 kubeadm.go:310] 		- The kubelet is not running
	I0729 18:33:28.269686   78080 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 18:33:28.269698   78080 kubeadm.go:310] 
	I0729 18:33:28.269846   78080 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 18:33:28.269902   78080 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 18:33:28.269946   78080 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 18:33:28.269969   78080 kubeadm.go:310] 
	I0729 18:33:28.270132   78080 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 18:33:28.270246   78080 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 18:33:28.270258   78080 kubeadm.go:310] 
	I0729 18:33:28.270434   78080 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 18:33:28.270567   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 18:33:28.270674   78080 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 18:33:28.270774   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 18:33:28.270784   78080 kubeadm.go:310] 
	I0729 18:33:28.271347   78080 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:33:28.271428   78080 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 18:33:28.271503   78080 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 18:33:28.271650   78080 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
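The kubeadm output above repeatedly fails the kubelet health check on 127.0.0.1:10248 and then suggests inspecting the kubelet unit and the container runtime. A minimal sketch of those checks run on the node itself, for example via minikube ssh; the profile name is not shown at this point in the log, so PROFILE is a placeholder, and the crictl invocation is copied from the kubeadm hint above:

	# open a shell on the node (PROFILE is a placeholder for the profile being started)
	minikube ssh -p PROFILE
	# inside the node: kubelet unit state and recent logs
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	# list any control-plane containers CRI-O managed to start
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause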
	
	I0729 18:33:28.271713   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:33:28.743675   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:33:28.759228   78080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:33:28.768522   78080 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:33:28.768546   78080 kubeadm.go:157] found existing configuration files:
	
	I0729 18:33:28.768593   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:33:28.777423   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:33:28.777481   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:33:28.786450   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:33:28.795335   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:33:28.795386   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:33:28.804519   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:33:28.813137   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:33:28.813193   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:33:28.822053   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:33:28.830463   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:33:28.830513   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:33:28.839818   78080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:33:29.066010   78080 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:35:25.197434   78080 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 18:35:25.197566   78080 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 18:35:25.199476   78080 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 18:35:25.199554   78080 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:35:25.199667   78080 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:35:25.199800   78080 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:35:25.199937   78080 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:35:25.200054   78080 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:35:25.201801   78080 out.go:204]   - Generating certificates and keys ...
	I0729 18:35:25.201875   78080 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:35:25.201944   78080 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:35:25.202073   78080 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:35:25.202136   78080 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:35:25.202231   78080 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:35:25.202287   78080 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:35:25.202339   78080 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:35:25.202426   78080 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:35:25.202492   78080 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:35:25.202560   78080 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:35:25.202603   78080 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:35:25.202692   78080 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:35:25.202779   78080 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:35:25.202863   78080 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:35:25.202962   78080 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:35:25.203070   78080 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:35:25.203213   78080 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:35:25.203289   78080 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:35:25.203323   78080 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:35:25.203381   78080 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:35:25.204837   78080 out.go:204]   - Booting up control plane ...
	I0729 18:35:25.204920   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:35:25.204985   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:35:25.205053   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:35:25.205146   78080 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:35:25.205274   78080 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 18:35:25.205316   78080 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 18:35:25.205379   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.205591   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.205658   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.205828   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.205926   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.206142   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.206204   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.206411   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.206488   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.206683   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.206698   78080 kubeadm.go:310] 
	I0729 18:35:25.206755   78080 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 18:35:25.206817   78080 kubeadm.go:310] 		timed out waiting for the condition
	I0729 18:35:25.206827   78080 kubeadm.go:310] 
	I0729 18:35:25.206860   78080 kubeadm.go:310] 	This error is likely caused by:
	I0729 18:35:25.206890   78080 kubeadm.go:310] 		- The kubelet is not running
	I0729 18:35:25.206975   78080 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 18:35:25.206985   78080 kubeadm.go:310] 
	I0729 18:35:25.207099   78080 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 18:35:25.207134   78080 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 18:35:25.207167   78080 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 18:35:25.207177   78080 kubeadm.go:310] 
	I0729 18:35:25.207289   78080 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 18:35:25.207403   78080 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 18:35:25.207412   78080 kubeadm.go:310] 
	I0729 18:35:25.207532   78080 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 18:35:25.207640   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 18:35:25.207754   78080 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 18:35:25.207821   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 18:35:25.207854   78080 kubeadm.go:310] 
	I0729 18:35:25.207886   78080 kubeadm.go:394] duration metric: took 7m57.080498205s to StartCluster
	I0729 18:35:25.207923   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:35:25.207983   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:35:25.251803   78080 cri.go:89] found id: ""
	I0729 18:35:25.251841   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.251852   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:35:25.251859   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:35:25.251920   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:35:25.287842   78080 cri.go:89] found id: ""
	I0729 18:35:25.287877   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.287895   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:35:25.287903   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:35:25.287967   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:35:25.324546   78080 cri.go:89] found id: ""
	I0729 18:35:25.324573   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.324582   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:35:25.324588   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:35:25.324634   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:35:25.375723   78080 cri.go:89] found id: ""
	I0729 18:35:25.375746   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.375753   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:35:25.375759   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:35:25.375812   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:35:25.412580   78080 cri.go:89] found id: ""
	I0729 18:35:25.412604   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.412612   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:35:25.412617   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:35:25.412664   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:35:25.449360   78080 cri.go:89] found id: ""
	I0729 18:35:25.449397   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.449406   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:35:25.449413   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:35:25.449464   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:35:25.485655   78080 cri.go:89] found id: ""
	I0729 18:35:25.485687   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.485698   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:35:25.485705   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:35:25.485769   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:35:25.521752   78080 cri.go:89] found id: ""
	I0729 18:35:25.521776   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.521783   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:35:25.521792   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:35:25.521808   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:35:25.562894   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:35:25.562922   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:35:25.623879   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:35:25.623912   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:35:25.647315   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:35:25.647341   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:35:25.744827   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:35:25.744850   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:35:25.744865   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0729 18:35:25.849394   78080 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 18:35:25.849445   78080 out.go:239] * 
	W0729 18:35:25.849520   78080 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 18:35:25.849558   78080 out.go:239] * 
	W0729 18:35:25.850438   78080 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 18:35:25.853770   78080 out.go:177] 
	W0729 18:35:25.854982   78080 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 18:35:25.855035   78080 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 18:35:25.855060   78080 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 18:35:25.856444   78080 out.go:177] 
	
	
	==> CRI-O <==
	Jul 29 18:44:30 old-k8s-version-386663 crio[651]: time="2024-07-29 18:44:30.896119303Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278670896094135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9001ce79-b188-4b75-aaab-25ce9ccd4211 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:44:30 old-k8s-version-386663 crio[651]: time="2024-07-29 18:44:30.896626269Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=242f6d5b-2d4f-45eb-b6ef-f384d48ae10b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:30 old-k8s-version-386663 crio[651]: time="2024-07-29 18:44:30.896692527Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=242f6d5b-2d4f-45eb-b6ef-f384d48ae10b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:30 old-k8s-version-386663 crio[651]: time="2024-07-29 18:44:30.896780029Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=242f6d5b-2d4f-45eb-b6ef-f384d48ae10b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:30 old-k8s-version-386663 crio[651]: time="2024-07-29 18:44:30.928439777Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=90a61f5b-ce55-4b57-9e10-26ad9c62219c name=/runtime.v1.RuntimeService/Version
	Jul 29 18:44:30 old-k8s-version-386663 crio[651]: time="2024-07-29 18:44:30.928539722Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=90a61f5b-ce55-4b57-9e10-26ad9c62219c name=/runtime.v1.RuntimeService/Version
	Jul 29 18:44:30 old-k8s-version-386663 crio[651]: time="2024-07-29 18:44:30.929966684Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2132bbc1-0a63-4799-bbcb-95c04a105a81 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:44:30 old-k8s-version-386663 crio[651]: time="2024-07-29 18:44:30.930420156Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278670930388562,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2132bbc1-0a63-4799-bbcb-95c04a105a81 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:44:30 old-k8s-version-386663 crio[651]: time="2024-07-29 18:44:30.930880406Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=14f74c05-af89-43f1-8e82-66ce6f487f91 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:30 old-k8s-version-386663 crio[651]: time="2024-07-29 18:44:30.930929211Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=14f74c05-af89-43f1-8e82-66ce6f487f91 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:30 old-k8s-version-386663 crio[651]: time="2024-07-29 18:44:30.930958900Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=14f74c05-af89-43f1-8e82-66ce6f487f91 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:30 old-k8s-version-386663 crio[651]: time="2024-07-29 18:44:30.961294022Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c27cfa00-7d3b-400b-8b01-08e99baf28bd name=/runtime.v1.RuntimeService/Version
	Jul 29 18:44:30 old-k8s-version-386663 crio[651]: time="2024-07-29 18:44:30.961388861Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c27cfa00-7d3b-400b-8b01-08e99baf28bd name=/runtime.v1.RuntimeService/Version
	Jul 29 18:44:30 old-k8s-version-386663 crio[651]: time="2024-07-29 18:44:30.962416822Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=68945548-0fb8-410a-a313-4a8f49b77932 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:44:30 old-k8s-version-386663 crio[651]: time="2024-07-29 18:44:30.962859987Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278670962839467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68945548-0fb8-410a-a313-4a8f49b77932 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:44:30 old-k8s-version-386663 crio[651]: time="2024-07-29 18:44:30.963664996Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec23b195-3add-484b-b903-195c726953a3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:30 old-k8s-version-386663 crio[651]: time="2024-07-29 18:44:30.963776844Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec23b195-3add-484b-b903-195c726953a3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:30 old-k8s-version-386663 crio[651]: time="2024-07-29 18:44:30.963811656Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ec23b195-3add-484b-b903-195c726953a3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:30 old-k8s-version-386663 crio[651]: time="2024-07-29 18:44:30.995280580Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e0861178-c3e1-4846-b361-28879a876d24 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:44:30 old-k8s-version-386663 crio[651]: time="2024-07-29 18:44:30.995362477Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e0861178-c3e1-4846-b361-28879a876d24 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:44:30 old-k8s-version-386663 crio[651]: time="2024-07-29 18:44:30.996566063Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8cde088a-688e-4d64-92d7-54aec718cf88 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:44:30 old-k8s-version-386663 crio[651]: time="2024-07-29 18:44:30.997070844Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278670997048782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8cde088a-688e-4d64-92d7-54aec718cf88 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:44:30 old-k8s-version-386663 crio[651]: time="2024-07-29 18:44:30.997613628Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1946cc0-e028-4339-bd69-aa66de2a35aa name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:30 old-k8s-version-386663 crio[651]: time="2024-07-29 18:44:30.997683086Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1946cc0-e028-4339-bd69-aa66de2a35aa name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:30 old-k8s-version-386663 crio[651]: time="2024-07-29 18:44:30.997786459Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f1946cc0-e028-4339-bd69-aa66de2a35aa name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul29 18:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053138] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.049545] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.032104] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.544033] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.650966] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.461872] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.060757] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073737] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.211657] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.138817] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.279940] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +6.393435] systemd-fstab-generator[838]: Ignoring "noauto" option for root device
	[  +0.066263] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.863430] systemd-fstab-generator[963]: Ignoring "noauto" option for root device
	[ +12.812099] kauditd_printk_skb: 46 callbacks suppressed
	[Jul29 18:31] systemd-fstab-generator[5034]: Ignoring "noauto" option for root device
	[Jul29 18:33] systemd-fstab-generator[5313]: Ignoring "noauto" option for root device
	[  +0.068948] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:44:31 up 17 min,  0 users,  load average: 0.03, 0.04, 0.04
	Linux old-k8s-version-386663 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 29 18:44:31 old-k8s-version-386663 kubelet[6488]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/config/file.go:95 +0x5b
	Jul 29 18:44:31 old-k8s-version-386663 kubelet[6488]: goroutine 135 [select]:
	Jul 29 18:44:31 old-k8s-version-386663 kubelet[6488]: k8s.io/kubernetes/pkg/kubelet/config.(*sourceFile).doWatch(0xc0003719a0, 0x0, 0x0)
	Jul 29 18:44:31 old-k8s-version-386663 kubelet[6488]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/config/file_linux.go:91 +0x3c5
	Jul 29 18:44:31 old-k8s-version-386663 kubelet[6488]: k8s.io/kubernetes/pkg/kubelet/config.(*sourceFile).startWatch.func1()
	Jul 29 18:44:31 old-k8s-version-386663 kubelet[6488]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/config/file_linux.go:59 +0xb6
	Jul 29 18:44:31 old-k8s-version-386663 kubelet[6488]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000b7f260)
	Jul 29 18:44:31 old-k8s-version-386663 kubelet[6488]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Jul 29 18:44:31 old-k8s-version-386663 kubelet[6488]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b7f260, 0x4f0ac40, 0xc000bc6630, 0x1, 0xc00009e0c0)
	Jul 29 18:44:31 old-k8s-version-386663 kubelet[6488]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Jul 29 18:44:31 old-k8s-version-386663 kubelet[6488]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000b7f260, 0x3b9aca00, 0x0, 0x1, 0xc00009e0c0)
	Jul 29 18:44:31 old-k8s-version-386663 kubelet[6488]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
	Jul 29 18:44:31 old-k8s-version-386663 kubelet[6488]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
	Jul 29 18:44:31 old-k8s-version-386663 kubelet[6488]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
	Jul 29 18:44:31 old-k8s-version-386663 kubelet[6488]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0xc000b7f260, 0x3b9aca00)
	Jul 29 18:44:31 old-k8s-version-386663 kubelet[6488]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
	Jul 29 18:44:31 old-k8s-version-386663 kubelet[6488]: created by k8s.io/kubernetes/pkg/kubelet/config.(*sourceFile).startWatch
	Jul 29 18:44:31 old-k8s-version-386663 kubelet[6488]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/config/file_linux.go:54 +0x116
	Jul 29 18:44:31 old-k8s-version-386663 kubelet[6488]: goroutine 136 [chan receive]:
	Jul 29 18:44:31 old-k8s-version-386663 kubelet[6488]: k8s.io/kubernetes/pkg/util/config.(*Mux).listen(0xc000b7f080, 0x48ab35a, 0x3, 0xc000b9c960)
	Jul 29 18:44:31 old-k8s-version-386663 kubelet[6488]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/config/config.go:82 +0x89
	Jul 29 18:44:31 old-k8s-version-386663 kubelet[6488]: k8s.io/kubernetes/pkg/util/config.(*Mux).Channel.func1()
	Jul 29 18:44:31 old-k8s-version-386663 kubelet[6488]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/config/config.go:77 +0x45
	Jul 29 18:44:31 old-k8s-version-386663 kubelet[6488]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000b7f290)
	Jul 29 18:44:31 old-k8s-version-386663 kubelet[6488]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-386663 -n old-k8s-version-386663
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-386663 -n old-k8s-version-386663: exit status 2 (225.586079ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-386663" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.26s)
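
The failing run above comes down to the kubelet on the v1.20.0 node never answering its health check on localhost:10248, so kubeadm times out waiting for the control plane and the apiserver stays unreachable on localhost:8443. As a hedged triage sketch only (these commands are assembled from the suggestions quoted in the log, not taken from this run, and the cgroup-driver flag is just the hint minikube prints, not a confirmed fix):

	# inspect kubelet health inside the profile's VM, as the kubeadm output recommends
	out/minikube-linux-amd64 -p old-k8s-version-386663 ssh -- sudo systemctl status kubelet
	out/minikube-linux-amd64 -p old-k8s-version-386663 ssh -- sudo journalctl -xeu kubelet
	# list control-plane containers through CRI-O
	out/minikube-linux-amd64 -p old-k8s-version-386663 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	# retry the start with the cgroup driver minikube suggests
	out/minikube-linux-amd64 start -p old-k8s-version-386663 --extra-config=kubelet.cgroup-driver=systemd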

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (458.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-502055 -n default-k8s-diff-port-502055
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-29 18:48:21.266590656 +0000 UTC m=+6757.933958976
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-502055 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-502055 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.517µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-502055 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
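
The assertion above waits up to 9m0s for a pod labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace and then tries to describe the dashboard-metrics-scraper deployment, but the overall test context has already hit its deadline, so both checks come back empty. A hedged sketch of reproducing the same check by hand against this profile (commands assembled from the test arguments above, not output from the run):

	# list the dashboard pods the test waits for
	kubectl --context default-k8s-diff-port-502055 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# inspect the scraper deployment and the image it was configured with
	kubectl --context default-k8s-diff-port-502055 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper
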
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-502055 -n default-k8s-diff-port-502055
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-502055 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-502055 logs -n 25: (1.118815533s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:19 UTC |
	|         | default-k8s-diff-port-502055                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-888056             | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC | 29 Jul 24 18:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-888056                                   | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-409322            | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC | 29 Jul 24 18:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-409322                                  | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-502055  | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC |                     |
	|         | default-k8s-diff-port-502055                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-386663        | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-888056                  | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-888056 --memory=2200                     | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:33 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-409322                 | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-409322                                  | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-502055       | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:31 UTC |
	|         | default-k8s-diff-port-502055                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-386663                              | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:22 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-386663             | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-386663                              | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-386663                              | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC | 29 Jul 24 18:46 UTC |
	| start   | -p newest-cni-903256 --memory=2200 --alsologtostderr   | newest-cni-903256            | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC | 29 Jul 24 18:47 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-903256             | newest-cni-903256            | jenkins | v1.33.1 | 29 Jul 24 18:47 UTC | 29 Jul 24 18:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-903256                                   | newest-cni-903256            | jenkins | v1.33.1 | 29 Jul 24 18:47 UTC | 29 Jul 24 18:47 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-888056                                   | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:47 UTC | 29 Jul 24 18:47 UTC |
	| addons  | enable dashboard -p newest-cni-903256                  | newest-cni-903256            | jenkins | v1.33.1 | 29 Jul 24 18:47 UTC | 29 Jul 24 18:47 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-903256 --memory=2200 --alsologtostderr   | newest-cni-903256            | jenkins | v1.33.1 | 29 Jul 24 18:47 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p embed-certs-409322                                  | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:47 UTC | 29 Jul 24 18:47 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 18:47:29
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 18:47:29.018625   85077 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:47:29.018928   85077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:47:29.018941   85077 out.go:304] Setting ErrFile to fd 2...
	I0729 18:47:29.018947   85077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:47:29.019238   85077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 18:47:29.019919   85077 out.go:298] Setting JSON to false
	I0729 18:47:29.021145   85077 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":9001,"bootTime":1722269848,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:47:29.021221   85077 start.go:139] virtualization: kvm guest
	I0729 18:47:29.023563   85077 out.go:177] * [newest-cni-903256] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:47:29.024828   85077 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 18:47:29.024830   85077 notify.go:220] Checking for updates...
	I0729 18:47:29.027235   85077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:47:29.028538   85077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:47:29.029754   85077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 18:47:29.030855   85077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:47:29.031831   85077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:47:29.033261   85077 config.go:182] Loaded profile config "newest-cni-903256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:47:29.033744   85077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:47:29.033800   85077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:47:29.049700   85077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37033
	I0729 18:47:29.050165   85077 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:47:29.050850   85077 main.go:141] libmachine: Using API Version  1
	I0729 18:47:29.050882   85077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:47:29.051272   85077 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:47:29.051489   85077 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:47:29.051771   85077 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:47:29.052106   85077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:47:29.052139   85077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:47:29.066843   85077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35469
	I0729 18:47:29.067310   85077 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:47:29.067824   85077 main.go:141] libmachine: Using API Version  1
	I0729 18:47:29.067849   85077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:47:29.068172   85077 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:47:29.068452   85077 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:47:29.107245   85077 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 18:47:29.108459   85077 start.go:297] selected driver: kvm2
	I0729 18:47:29.108485   85077 start.go:901] validating driver "kvm2" against &{Name:newest-cni-903256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-903256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.148 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:47:29.108583   85077 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:47:29.109226   85077 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:47:29.109301   85077 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19345-11206/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:47:29.124214   85077 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:47:29.124720   85077 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 18:47:29.124755   85077 cni.go:84] Creating CNI manager for ""
	I0729 18:47:29.124766   85077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:47:29.124820   85077 start.go:340] cluster config:
	{Name:newest-cni-903256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-903256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.148 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:47:29.124974   85077 iso.go:125] acquiring lock: {Name:mke302f851ce8256f9b44dd080ed38df68285cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:47:29.126731   85077 out.go:177] * Starting "newest-cni-903256" primary control-plane node in "newest-cni-903256" cluster
	I0729 18:47:29.127995   85077 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 18:47:29.128037   85077 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 18:47:29.128047   85077 cache.go:56] Caching tarball of preloaded images
	I0729 18:47:29.128135   85077 preload.go:172] Found /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:47:29.128147   85077 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0729 18:47:29.128271   85077 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/config.json ...
	I0729 18:47:29.128481   85077 start.go:360] acquireMachinesLock for newest-cni-903256: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:47:29.128526   85077 start.go:364] duration metric: took 24.033µs to acquireMachinesLock for "newest-cni-903256"
	I0729 18:47:29.128540   85077 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:47:29.128545   85077 fix.go:54] fixHost starting: 
	I0729 18:47:29.128885   85077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:47:29.128919   85077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:47:29.144430   85077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36729
	I0729 18:47:29.144884   85077 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:47:29.145365   85077 main.go:141] libmachine: Using API Version  1
	I0729 18:47:29.145387   85077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:47:29.145673   85077 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:47:29.145899   85077 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:47:29.146051   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetState
	I0729 18:47:29.147714   85077 fix.go:112] recreateIfNeeded on newest-cni-903256: state=Stopped err=<nil>
	I0729 18:47:29.147748   85077 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	W0729 18:47:29.147913   85077 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:47:29.149299   85077 out.go:177] * Restarting existing kvm2 VM for "newest-cni-903256" ...
	I0729 18:47:29.150661   85077 main.go:141] libmachine: (newest-cni-903256) Calling .Start
	I0729 18:47:29.150864   85077 main.go:141] libmachine: (newest-cni-903256) Ensuring networks are active...
	I0729 18:47:29.151689   85077 main.go:141] libmachine: (newest-cni-903256) Ensuring network default is active
	I0729 18:47:29.152124   85077 main.go:141] libmachine: (newest-cni-903256) Ensuring network mk-newest-cni-903256 is active
	I0729 18:47:29.152556   85077 main.go:141] libmachine: (newest-cni-903256) Getting domain xml...
	I0729 18:47:29.153327   85077 main.go:141] libmachine: (newest-cni-903256) Creating domain...
	I0729 18:47:31.447152   85077 main.go:141] libmachine: (newest-cni-903256) Waiting to get IP...
	I0729 18:47:31.448132   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:31.448587   85077 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:47:31.448665   85077 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:47:31.448554   85112 retry.go:31] will retry after 235.441796ms: waiting for machine to come up
	I0729 18:47:31.686219   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:31.686776   85077 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:47:31.686798   85077 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:47:31.686749   85112 retry.go:31] will retry after 293.393202ms: waiting for machine to come up
	I0729 18:47:31.982422   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:31.982958   85077 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:47:31.982985   85077 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:47:31.982913   85112 retry.go:31] will retry after 308.310307ms: waiting for machine to come up
	I0729 18:47:32.292323   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:32.292887   85077 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:47:32.292928   85077 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:47:32.292846   85112 retry.go:31] will retry after 603.662385ms: waiting for machine to come up
	I0729 18:47:32.898694   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:32.899230   85077 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:47:32.899260   85077 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:47:32.899191   85112 retry.go:31] will retry after 540.496046ms: waiting for machine to come up
	I0729 18:47:33.440942   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:33.441362   85077 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:47:33.441383   85077 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:47:33.441342   85112 retry.go:31] will retry after 915.435437ms: waiting for machine to come up
	I0729 18:47:34.358934   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:34.359506   85077 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:47:34.359524   85077 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:47:34.359460   85112 retry.go:31] will retry after 761.510361ms: waiting for machine to come up
	I0729 18:47:35.122291   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:35.122714   85077 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:47:35.122739   85077 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:47:35.122674   85112 retry.go:31] will retry after 1.403872466s: waiting for machine to come up
	I0729 18:47:36.528299   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:36.528821   85077 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:47:36.528845   85077 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:47:36.528794   85112 retry.go:31] will retry after 1.398247551s: waiting for machine to come up
	I0729 18:47:37.929271   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:37.929907   85077 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:47:37.929932   85077 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:47:37.929860   85112 retry.go:31] will retry after 1.564552261s: waiting for machine to come up
	I0729 18:47:39.496522   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:39.497006   85077 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:47:39.497031   85077 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:47:39.496972   85112 retry.go:31] will retry after 1.851559818s: waiting for machine to come up
	I0729 18:47:41.350720   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:41.351334   85077 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:47:41.351375   85077 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:47:41.351281   85112 retry.go:31] will retry after 2.424751596s: waiting for machine to come up
	I0729 18:47:43.779009   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:43.779450   85077 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:47:43.779471   85077 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:47:43.779411   85112 retry.go:31] will retry after 3.146855468s: waiting for machine to come up
	I0729 18:47:46.929950   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:46.930382   85077 main.go:141] libmachine: (newest-cni-903256) Found IP for machine: 192.168.50.148
	I0729 18:47:46.930428   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has current primary IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:46.930442   85077 main.go:141] libmachine: (newest-cni-903256) Reserving static IP address...
	I0729 18:47:46.930920   85077 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "newest-cni-903256", mac: "52:54:00:b7:b1:4e", ip: "192.168.50.148"} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:47:41 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:47:46.930955   85077 main.go:141] libmachine: (newest-cni-903256) DBG | skip adding static IP to network mk-newest-cni-903256 - found existing host DHCP lease matching {name: "newest-cni-903256", mac: "52:54:00:b7:b1:4e", ip: "192.168.50.148"}
	I0729 18:47:46.930968   85077 main.go:141] libmachine: (newest-cni-903256) Reserved static IP address: 192.168.50.148
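The block above is minikube's wait-for-IP loop: the kvm2 driver polls libvirt for a DHCP lease on the mk-newest-cni-903256 network, backing off between attempts (235ms, 293ms, ... up to a few seconds) until the domain reports 192.168.50.148. Below is a minimal shell sketch of the same idea; it is illustrative only (minikube talks to libvirt from Go rather than shelling out to virsh), and the doubling delay is a simplification of the jittered backoff seen in the log.

    # Sketch: poll libvirt for the domain's DHCP lease with a growing delay.
    # Network name and MAC address are taken from the log above.
    delay=0.25
    for attempt in $(seq 1 20); do
      ip=$(sudo virsh -c qemu:///system net-dhcp-leases mk-newest-cni-903256 \
             | awk '/52:54:00:b7:b1:4e/ {print $5}' | cut -d/ -f1)
      if [ -n "$ip" ]; then echo "got IP: $ip"; break; fi
      echo "no lease yet, retrying in ${delay}s"
      sleep "$delay"
      delay=$(awk -v d="$delay" 'BEGIN {print d * 2}')   # simplified backoff
    done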
	I0729 18:47:46.930984   85077 main.go:141] libmachine: (newest-cni-903256) Waiting for SSH to be available...
	I0729 18:47:46.930993   85077 main.go:141] libmachine: (newest-cni-903256) DBG | Getting to WaitForSSH function...
	I0729 18:47:46.933400   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:46.933736   85077 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:47:41 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:47:46.933763   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:46.933892   85077 main.go:141] libmachine: (newest-cni-903256) DBG | Using SSH client type: external
	I0729 18:47:46.933911   85077 main.go:141] libmachine: (newest-cni-903256) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa (-rw-------)
	I0729 18:47:46.933935   85077 main.go:141] libmachine: (newest-cni-903256) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:47:46.933948   85077 main.go:141] libmachine: (newest-cni-903256) DBG | About to run SSH command:
	I0729 18:47:46.933980   85077 main.go:141] libmachine: (newest-cni-903256) DBG | exit 0
	I0729 18:47:47.058231   85077 main.go:141] libmachine: (newest-cni-903256) DBG | SSH cmd err, output: <nil>: 
	I0729 18:47:47.058624   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetConfigRaw
	I0729 18:47:47.059214   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetIP
	I0729 18:47:47.061657   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:47.061971   85077 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:47:41 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:47:47.062001   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:47.062305   85077 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/config.json ...
	I0729 18:47:47.062541   85077 machine.go:94] provisionDockerMachine start ...
	I0729 18:47:47.062564   85077 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:47:47.062744   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:47:47.064655   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:47.064950   85077 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:47:41 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:47:47.064972   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:47.065033   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:47:47.065226   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:47:47.065362   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:47:47.065495   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:47:47.065777   85077 main.go:141] libmachine: Using SSH client type: native
	I0729 18:47:47.066014   85077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.148 22 <nil> <nil>}
	I0729 18:47:47.066028   85077 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:47:47.170192   85077 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:47:47.170215   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetMachineName
	I0729 18:47:47.170525   85077 buildroot.go:166] provisioning hostname "newest-cni-903256"
	I0729 18:47:47.170554   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetMachineName
	I0729 18:47:47.170734   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:47:47.173191   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:47.173594   85077 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:47:41 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:47:47.173626   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:47.173787   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:47:47.173986   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:47:47.174139   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:47:47.174267   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:47:47.174482   85077 main.go:141] libmachine: Using SSH client type: native
	I0729 18:47:47.174652   85077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.148 22 <nil> <nil>}
	I0729 18:47:47.174666   85077 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-903256 && echo "newest-cni-903256" | sudo tee /etc/hostname
	I0729 18:47:47.296786   85077 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-903256
	
	I0729 18:47:47.296818   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:47:47.299582   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:47.299943   85077 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:47:41 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:47:47.299969   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:47.300169   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:47:47.300345   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:47:47.300501   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:47:47.300623   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:47:47.300751   85077 main.go:141] libmachine: Using SSH client type: native
	I0729 18:47:47.300950   85077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.148 22 <nil> <nil>}
	I0729 18:47:47.300967   85077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-903256' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-903256/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-903256' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:47:47.415526   85077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
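Hostname provisioning above is two SSH commands: the first writes /etc/hostname and sets the transient hostname, and the grep/sed block just executed makes sure /etc/hosts maps 127.0.1.1 to the new name so local name lookups resolve inside the guest. A quick manual check, shown only as an illustration (key path and address come from the log):

    # Verify the hostname and the 127.0.1.1 mapping inside the guest.
    ssh -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa \
        docker@192.168.50.148 'hostname; grep 127.0.1.1 /etc/hosts'
    # expected (roughly):
    #   newest-cni-903256
    #   127.0.1.1 newest-cni-903256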
	I0729 18:47:47.415559   85077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:47:47.415597   85077 buildroot.go:174] setting up certificates
	I0729 18:47:47.415605   85077 provision.go:84] configureAuth start
	I0729 18:47:47.415614   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetMachineName
	I0729 18:47:47.415879   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetIP
	I0729 18:47:47.418580   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:47.418944   85077 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:47:41 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:47:47.418968   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:47.419100   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:47:47.420914   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:47.421214   85077 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:47:41 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:47:47.421239   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:47.421383   85077 provision.go:143] copyHostCerts
	I0729 18:47:47.421429   85077 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:47:47.421440   85077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:47:47.421519   85077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:47:47.421628   85077 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:47:47.421640   85077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:47:47.421678   85077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:47:47.421758   85077 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:47:47.421768   85077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:47:47.421810   85077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:47:47.421961   85077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.newest-cni-903256 san=[127.0.0.1 192.168.50.148 localhost minikube newest-cni-903256]
	I0729 18:47:47.841453   85077 provision.go:177] copyRemoteCerts
	I0729 18:47:47.841517   85077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:47:47.841542   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:47:47.844157   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:47.844445   85077 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:47:41 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:47:47.844474   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:47.844634   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:47:47.844942   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:47:47.845160   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:47:47.845310   85077 sshutil.go:53] new ssh client: &{IP:192.168.50.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa Username:docker}
	I0729 18:47:47.928253   85077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:47:47.952100   85077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 18:47:47.975076   85077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 18:47:47.998956   85077 provision.go:87] duration metric: took 583.337879ms to configureAuth
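configureAuth above regenerates the machine's server certificate, signed by the shared minikube CA and carrying SANs for 127.0.0.1, 192.168.50.148, localhost, minikube and newest-cni-903256, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. One way to eyeball the result by hand against the host-side copy (illustrative; the -ext option needs OpenSSL 1.1.1 or newer):

    # Inspect the generated server certificate for its subject and SANs.
    openssl x509 -in /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem \
        -noout -subject -ext subjectAltName
    # expect DNS:localhost, DNS:minikube, DNS:newest-cni-903256 plus
    # IP:127.0.0.1 and IP:192.168.50.148 among the SANs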
	I0729 18:47:47.998985   85077 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:47:47.999205   85077 config.go:182] Loaded profile config "newest-cni-903256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:47:47.999303   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:47:48.001942   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:48.002260   85077 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:47:41 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:47:48.002291   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:48.002437   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:47:48.002675   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:47:48.002902   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:47:48.003088   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:47:48.003287   85077 main.go:141] libmachine: Using SSH client type: native
	I0729 18:47:48.003471   85077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.148 22 <nil> <nil>}
	I0729 18:47:48.003489   85077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:47:48.265634   85077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:47:48.265656   85077 machine.go:97] duration metric: took 1.203101048s to provisionDockerMachine
	I0729 18:47:48.265668   85077 start.go:293] postStartSetup for "newest-cni-903256" (driver="kvm2")
	I0729 18:47:48.265679   85077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:47:48.265694   85077 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:47:48.266009   85077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:47:48.266039   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:47:48.268491   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:48.268865   85077 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:47:41 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:47:48.268889   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:48.269053   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:47:48.269272   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:47:48.269466   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:47:48.269624   85077 sshutil.go:53] new ssh client: &{IP:192.168.50.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa Username:docker}
	I0729 18:47:48.353100   85077 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:47:48.357274   85077 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:47:48.357295   85077 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:47:48.357357   85077 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:47:48.357450   85077 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:47:48.357555   85077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:47:48.367043   85077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:47:48.390716   85077 start.go:296] duration metric: took 125.036768ms for postStartSetup
	I0729 18:47:48.390757   85077 fix.go:56] duration metric: took 19.262212007s for fixHost
	I0729 18:47:48.390782   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:47:48.393147   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:48.393458   85077 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:47:41 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:47:48.393486   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:48.393601   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:47:48.393793   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:47:48.393970   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:47:48.394109   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:47:48.394282   85077 main.go:141] libmachine: Using SSH client type: native
	I0729 18:47:48.394477   85077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.148 22 <nil> <nil>}
	I0729 18:47:48.394490   85077 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:47:48.498734   85077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722278868.459456670
	
	I0729 18:47:48.498753   85077 fix.go:216] guest clock: 1722278868.459456670
	I0729 18:47:48.498759   85077 fix.go:229] Guest: 2024-07-29 18:47:48.45945667 +0000 UTC Remote: 2024-07-29 18:47:48.390762639 +0000 UTC m=+19.412844856 (delta=68.694031ms)
	I0729 18:47:48.498793   85077 fix.go:200] guest clock delta is within tolerance: 68.694031ms
	I0729 18:47:48.498797   85077 start.go:83] releasing machines lock for "newest-cni-903256", held for 19.37026198s
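The guest clock check above runs date +%s.%N over SSH and compares it with the host's wall clock; here the ~69ms delta is accepted, otherwise minikube would resync the guest clock. A rough shell equivalent (illustrative; the 1-second threshold is an arbitrary choice for the sketch, not necessarily minikube's tolerance):

    # Compare guest and host clocks and report the absolute skew.
    key=/home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa
    guest=$(ssh -i "$key" docker@192.168.50.148 'date +%s.%N')
    host=$(date +%s.%N)
    awk -v g="$guest" -v h="$host" 'BEGIN {
      d = g - h; if (d < 0) d = -d;
      printf "clock delta: %.3fs (%s)\n", d, (d < 1 ? "within tolerance" : "resync needed");
    }'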
	I0729 18:47:48.498814   85077 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:47:48.499062   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetIP
	I0729 18:47:48.501559   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:48.501876   85077 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:47:41 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:47:48.501913   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:48.502072   85077 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:47:48.502598   85077 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:47:48.502794   85077 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:47:48.502850   85077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:47:48.502902   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:47:48.503026   85077 ssh_runner.go:195] Run: cat /version.json
	I0729 18:47:48.503051   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:47:48.505653   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:48.505797   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:48.505999   85077 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:47:41 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:47:48.506022   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:48.506063   85077 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:47:41 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:47:48.506078   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:48.506242   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:47:48.506405   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:47:48.506412   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:47:48.506572   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:47:48.506586   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:47:48.506728   85077 sshutil.go:53] new ssh client: &{IP:192.168.50.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa Username:docker}
	I0729 18:47:48.506748   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:47:48.507010   85077 sshutil.go:53] new ssh client: &{IP:192.168.50.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa Username:docker}
	I0729 18:47:48.587479   85077 ssh_runner.go:195] Run: systemctl --version
	I0729 18:47:48.612382   85077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:47:48.760057   85077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:47:48.766015   85077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:47:48.766073   85077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:47:48.782776   85077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:47:48.782800   85077 start.go:495] detecting cgroup driver to use...
	I0729 18:47:48.782852   85077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:47:48.801581   85077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:47:48.815744   85077 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:47:48.815796   85077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:47:48.829783   85077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:47:48.843459   85077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:47:48.953087   85077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:47:49.088538   85077 docker.go:233] disabling docker service ...
	I0729 18:47:49.088626   85077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:47:49.102808   85077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:47:49.115428   85077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:47:49.250546   85077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:47:49.369507   85077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:47:49.382973   85077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:47:49.401880   85077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 18:47:49.401942   85077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:47:49.412946   85077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:47:49.413010   85077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:47:49.423932   85077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:47:49.434082   85077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:47:49.444384   85077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:47:49.454723   85077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:47:49.464885   85077 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:47:49.482085   85077 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
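The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image to registry.k8s.io/pause:3.10, switches the cgroup manager to cgroupfs with conmon running in the "pod" cgroup, and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A quick way to confirm the resulting drop-in by hand (illustrative command; the expected lines are inferred from the sed expressions, not copied from the VM):

    # Show the settings the sed commands above are expected to leave behind.
    key=/home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa
    ssh -i "$key" docker@192.168.50.148 \
        "grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
             /etc/crio/crio.conf.d/02-crio.conf"
    # expected (roughly):
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",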
	I0729 18:47:49.491887   85077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:47:49.500815   85077 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:47:49.500866   85077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:47:49.514018   85077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:47:49.523149   85077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:47:49.637081   85077 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:47:49.776897   85077 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:47:49.776984   85077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:47:49.782093   85077 start.go:563] Will wait 60s for crictl version
	I0729 18:47:49.782153   85077 ssh_runner.go:195] Run: which crictl
	I0729 18:47:49.785971   85077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:47:49.827880   85077 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:47:49.827954   85077 ssh_runner.go:195] Run: crio --version
	I0729 18:47:49.857719   85077 ssh_runner.go:195] Run: crio --version
	I0729 18:47:49.887710   85077 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 18:47:49.889014   85077 main.go:141] libmachine: (newest-cni-903256) Calling .GetIP
	I0729 18:47:49.891472   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:49.891831   85077 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:47:41 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:47:49.891859   85077 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:49.892108   85077 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 18:47:49.896364   85077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:47:49.910848   85077 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0729 18:47:49.912250   85077 kubeadm.go:883] updating cluster {Name:newest-cni-903256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:newest-cni-903256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.148 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] Sta
rtHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:47:49.912399   85077 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 18:47:49.912480   85077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:47:49.952900   85077 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 18:47:49.952975   85077 ssh_runner.go:195] Run: which lz4
	I0729 18:47:49.957341   85077 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 18:47:49.961514   85077 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:47:49.961549   85077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (387176433 bytes)
	I0729 18:47:51.323208   85077 crio.go:462] duration metric: took 1.365895088s to copy over tarball
	I0729 18:47:51.323270   85077 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:47:53.301495   85077 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.978204923s)
	I0729 18:47:53.301519   85077 crio.go:469] duration metric: took 1.97828784s to extract the tarball
	I0729 18:47:53.301526   85077 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 18:47:53.337339   85077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:47:53.377801   85077 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:47:53.377821   85077 cache_images.go:84] Images are preloaded, skipping loading
	I0729 18:47:53.377828   85077 kubeadm.go:934] updating node { 192.168.50.148 8443 v1.31.0-beta.0 crio true true} ...
	I0729 18:47:53.377918   85077 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-903256 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-903256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
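The [Unit]/[Service] fragment above is the kubelet systemd drop-in minikube generates for this node (written further down as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf); the empty ExecStart= line clears the packaged command before the node-specific override with --node-ip and the feature gates. To see the effective unit inside the guest (shown only as an illustration):

    # Show the kubelet unit plus its 10-kubeadm.conf drop-in as systemd resolves them.
    ssh -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa \
        docker@192.168.50.148 'systemctl cat kubelet'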
	I0729 18:47:53.377981   85077 ssh_runner.go:195] Run: crio config
	I0729 18:47:53.422188   85077 cni.go:84] Creating CNI manager for ""
	I0729 18:47:53.422205   85077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:47:53.422214   85077 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0729 18:47:53.422235   85077 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.148 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-903256 NodeName:newest-cni-903256 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.148"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] Feature
Args:map[] NodeIP:192.168.50.148 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:47:53.422353   85077 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.148
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-903256"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.148
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.148"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
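
	The evictionHard block in the kubelet configuration above sets every disk threshold to a literal "0%", which disables the kubelet's disk-pressure evictions (matching the "# disable disk resource management by default" comment). Values like these are easy to mangle in logs: if the generated YAML is ever passed through a Printf-style call with no arguments, Go renders each bare % verb as %!"(MISSING), which is why such values often look garbled in raw minikube output. A minimal sketch of that behavior follows; it is an illustrative snippet, not minikube's actual template code.

	    package main

	    import "fmt"

	    func main() {
	        snippet := `nodefs.available: "0%"`

	        // Safe: the string is printed verbatim.
	        fmt.Println(snippet)

	        // Deliberately wrong usage: `%"` is parsed as a format verb with no
	        // matching argument, so fmt substitutes `%!"(MISSING)` and prints
	        //   nodefs.available: "0%!"(MISSING)
	        fmt.Printf(snippet + "\n")
	    }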
	
	I0729 18:47:53.422435   85077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 18:47:53.432307   85077 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:47:53.432364   85077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:47:53.441460   85077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0729 18:47:53.457384   85077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 18:47:53.473167   85077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0729 18:47:53.489192   85077 ssh_runner.go:195] Run: grep 192.168.50.148	control-plane.minikube.internal$ /etc/hosts
	I0729 18:47:53.492856   85077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.148	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:47:53.504183   85077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:47:53.630906   85077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:47:53.648639   85077 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256 for IP: 192.168.50.148
	I0729 18:47:53.648670   85077 certs.go:194] generating shared ca certs ...
	I0729 18:47:53.648689   85077 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:47:53.648863   85077 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:47:53.648922   85077 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:47:53.648936   85077 certs.go:256] generating profile certs ...
	I0729 18:47:53.649043   85077 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/client.key
	I0729 18:47:53.649149   85077 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/apiserver.key.fd2e148c
	I0729 18:47:53.649207   85077 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/proxy-client.key
	I0729 18:47:53.649360   85077 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:47:53.649398   85077 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:47:53.649411   85077 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:47:53.649443   85077 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:47:53.649474   85077 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:47:53.649503   85077 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:47:53.649554   85077 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:47:53.650448   85077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:47:53.691253   85077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:47:53.722885   85077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:47:53.748615   85077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:47:53.779849   85077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 18:47:53.810462   85077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 18:47:53.837951   85077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:47:53.861296   85077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 18:47:53.884810   85077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:47:53.908453   85077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:47:53.931677   85077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:47:53.954344   85077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:47:53.970423   85077 ssh_runner.go:195] Run: openssl version
	I0729 18:47:53.975953   85077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:47:53.986075   85077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:47:53.990185   85077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:47:53.990232   85077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:47:53.995763   85077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:47:54.005965   85077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:47:54.016092   85077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:47:54.020412   85077 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:47:54.020456   85077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:47:54.026213   85077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 18:47:54.036652   85077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:47:54.047153   85077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:47:54.051430   85077 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:47:54.051473   85077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:47:54.057138   85077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:47:54.067332   85077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:47:54.071689   85077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:47:54.077545   85077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:47:54.083072   85077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:47:54.088626   85077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:47:54.094184   85077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:47:54.099629   85077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
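
	The stat and openssl x509 -checkend 86400 commands above verify that each control-plane certificate exists and will still be valid 86400 seconds (24 hours) from now before the existing certs are reused. A rough Go equivalent of one such check is sketched below; the file path and 24-hour window are taken from the commands above, everything else is illustrative.

	    package main

	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "log"
	        "os"
	        "time"
	    )

	    func main() {
	        // Same certificate the log checks with `-checkend 86400`.
	        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	        if err != nil {
	            log.Fatal(err)
	        }
	        block, _ := pem.Decode(data)
	        if block == nil {
	            log.Fatal("no PEM block found")
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            log.Fatal(err)
	        }
	        // openssl's -checkend 86400: fail if the cert is no longer valid 86400s from now.
	        if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
	            fmt.Println("certificate will expire within 24h")
	            os.Exit(1)
	        }
	        fmt.Println("certificate is valid for at least another 24h")
	    }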
	I0729 18:47:54.105028   85077 kubeadm.go:392] StartCluster: {Name:newest-cni-903256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:newest-cni-903256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.148 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartH
ostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:47:54.105102   85077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:47:54.105141   85077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:47:54.143471   85077 cri.go:89] found id: ""
	I0729 18:47:54.143526   85077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:47:54.153681   85077 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:47:54.153699   85077 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:47:54.153744   85077 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:47:54.163027   85077 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:47:54.163524   85077 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-903256" does not appear in /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:47:54.163763   85077 kubeconfig.go:62] /home/jenkins/minikube-integration/19345-11206/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-903256" cluster setting kubeconfig missing "newest-cni-903256" context setting]
	I0729 18:47:54.164221   85077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:47:54.165358   85077 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:47:54.174404   85077 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.148
	I0729 18:47:54.174432   85077 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:47:54.174444   85077 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:47:54.174495   85077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:47:54.218166   85077 cri.go:89] found id: ""
	I0729 18:47:54.218260   85077 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:47:54.235614   85077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:47:54.244976   85077 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:47:54.244993   85077 kubeadm.go:157] found existing configuration files:
	
	I0729 18:47:54.245028   85077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:47:54.253794   85077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:47:54.253840   85077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:47:54.263361   85077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:47:54.271889   85077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:47:54.271939   85077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:47:54.283699   85077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:47:54.300377   85077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:47:54.300422   85077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:47:54.313047   85077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:47:54.323219   85077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:47:54.323263   85077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:47:54.332425   85077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:47:54.341757   85077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:47:54.455674   85077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:47:55.170615   85077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:47:55.392044   85077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:47:55.460566   85077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
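
	With the stale kubeconfig files removed, the restart path regenerates the control plane by running individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same /var/tmp/minikube/kubeadm.yaml, as shown in the five commands above. Below is a rough sketch of driving that sequence from Go, assuming the commands run locally on the node; the real code executes them over SSH via ssh_runner.

	    package main

	    import (
	        "fmt"
	        "log"
	        "os/exec"
	    )

	    func main() {
	        // Same phase order and paths as the logged commands.
	        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	        for _, phase := range phases {
	            cmd := fmt.Sprintf(
	                `sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
	                phase)
	            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	            if err != nil {
	                log.Fatalf("phase %q failed: %v\n%s", phase, err, out)
	            }
	        }
	    }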
	I0729 18:47:55.545196   85077 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:47:55.545272   85077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:47:56.045339   85077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:47:56.546165   85077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:47:57.046080   85077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:47:57.545488   85077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:47:57.563897   85077 api_server.go:72] duration metric: took 2.018699155s to wait for apiserver process to appear ...
	I0729 18:47:57.563932   85077 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:47:57.563956   85077 api_server.go:253] Checking apiserver healthz at https://192.168.50.148:8443/healthz ...
	I0729 18:47:57.564449   85077 api_server.go:269] stopped: https://192.168.50.148:8443/healthz: Get "https://192.168.50.148:8443/healthz": dial tcp 192.168.50.148:8443: connect: connection refused
	I0729 18:47:58.064178   85077 api_server.go:253] Checking apiserver healthz at https://192.168.50.148:8443/healthz ...
	I0729 18:48:03.065365   85077 api_server.go:269] stopped: https://192.168.50.148:8443/healthz: Get "https://192.168.50.148:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 18:48:03.065447   85077 api_server.go:253] Checking apiserver healthz at https://192.168.50.148:8443/healthz ...
	I0729 18:48:08.066450   85077 api_server.go:269] stopped: https://192.168.50.148:8443/healthz: Get "https://192.168.50.148:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 18:48:08.066495   85077 api_server.go:253] Checking apiserver healthz at https://192.168.50.148:8443/healthz ...
	I0729 18:48:13.067571   85077 api_server.go:269] stopped: https://192.168.50.148:8443/healthz: Get "https://192.168.50.148:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 18:48:13.067618   85077 api_server.go:253] Checking apiserver healthz at https://192.168.50.148:8443/healthz ...
	I0729 18:48:17.940792   85077 api_server.go:269] stopped: https://192.168.50.148:8443/healthz: Get "https://192.168.50.148:8443/healthz": read tcp 192.168.50.1:60666->192.168.50.148:8443: read: connection reset by peer
	I0729 18:48:17.940842   85077 api_server.go:253] Checking apiserver healthz at https://192.168.50.148:8443/healthz ...
	I0729 18:48:17.941389   85077 api_server.go:269] stopped: https://192.168.50.148:8443/healthz: Get "https://192.168.50.148:8443/healthz": dial tcp 192.168.50.148:8443: connect: connection refused
	I0729 18:48:18.064637   85077 api_server.go:253] Checking apiserver healthz at https://192.168.50.148:8443/healthz ...
	I0729 18:48:18.065221   85077 api_server.go:269] stopped: https://192.168.50.148:8443/healthz: Get "https://192.168.50.148:8443/healthz": dial tcp 192.168.50.148:8443: connect: connection refused
	I0729 18:48:18.565011   85077 api_server.go:253] Checking apiserver healthz at https://192.168.50.148:8443/healthz ...
	I0729 18:48:18.565638   85077 api_server.go:269] stopped: https://192.168.50.148:8443/healthz: Get "https://192.168.50.148:8443/healthz": dial tcp 192.168.50.148:8443: connect: connection refused
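
	Each "Checking apiserver healthz" line above is one iteration of a poll against https://192.168.50.148:8443/healthz; the connection-refused and client-timeout errors simply mean the restarted kube-apiserver is not accepting connections yet, and the run ultimately gives up in that state. A rough sketch of such a poll loop follows; the roughly 500ms retry cadence and ~5s per-request timeout are read off the timestamps above, and skipping TLS verification is a simplification (the real check trusts the cluster CA instead).

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout: 5 * time.Second, // matches the ~5s "context deadline exceeded" gaps above
	            Transport: &http.Transport{
	                // Simplification: the real client verifies against the minikube CA.
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        deadline := time.Now().Add(4 * time.Minute)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get("https://192.168.50.148:8443/healthz")
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    fmt.Println("apiserver is healthy")
	                    return
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        fmt.Println("timed out waiting for apiserver /healthz")
	    }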
	
	
	==> CRI-O <==
	Jul 29 18:48:21 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:48:21.888751059Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278901888731516,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6adcc436-5c31-4cd6-88c2-974335893db2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:48:21 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:48:21.889673895Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1938ad92-d9f9-4f3b-9b40-9df179cec0ba name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:48:21 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:48:21.889741493Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1938ad92-d9f9-4f3b-9b40-9df179cec0ba name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:48:21 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:48:21.890035575Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481,PodSandboxId:2cf16ca38be5f93e11353858e2145e82ad7f347fb110214f32b29b49abec9064,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277664606333649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2264d30-60dc-41f9-9b84-3b073031cf1b,},Annotations:map[string]string{io.kubernetes.container.hash: 50277011,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703935071efbcce9eda793a20cfa2eb88e4bf49ea400ba3282cfc8eb25fa4881,PodSandboxId:c66545b1536588610a9c3b00d6383eb0caf5ad3456d5ff739bb3fc889713c494,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722277642620412552,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16c2fbe7-3235-4c35-b89d-d36c39f5e8e3,},Annotations:map[string]string{io.kubernetes.container.hash: 6562b6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b,PodSandboxId:de6ef185d4530830585e64f93c85fac72c9f067ef410fa2d1164d0d28291b083,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277641405583173,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mk6mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e005b1f9-cc7a-45aa-915e-85a461ebc814,},Annotations:map[string]string{io.kubernetes.container.hash: ecd1b366,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9,PodSandboxId:afc17abc1c9144526ad042ee737b7273b4316a55f78e4cccbae1fd4f5bcb0937,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722277633728301138,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cgdm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57a99bb3-9e
63-47dd-a958-5be7f3c0a9c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7d6ad1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b,PodSandboxId:2cf16ca38be5f93e11353858e2145e82ad7f347fb110214f32b29b49abec9064,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722277633652711442,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2264d30-60dc-41f9-9b84-
3b073031cf1b,},Annotations:map[string]string{io.kubernetes.container.hash: 50277011,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a,PodSandboxId:68cd61bf2e8391ef03eca0645fde97f05f40b36a85956efa3b289d50e213b255,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722277628915439843,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7fe6a09729d5e8dbafc14f0bd53ac8,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 7fe8cd16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4,PodSandboxId:c1b20eb7e651a8283f9648b84f5c31e2dbf00a6ad1dc88868562196ea70f49c4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722277628904473069,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54c6f6d9e5e180f29a0eb48ba166ce41,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 531499dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc,PodSandboxId:82547cb3c923e9c5881111ce82af9322ad9355c946b36cea567f286568a5996f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722277628855580748,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6946d6da2d8baa3ff93ee0849b60
c03,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd,PodSandboxId:286ad36c4b4b8168d515c0001413b5e02c2f8919968e7f7dfa27c328b742da38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722277628813397299,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b81f633ff1c79be9b30276a7840ab3b
3,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1938ad92-d9f9-4f3b-9b40-9df179cec0ba name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:48:21 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:48:21.928712226Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e71182be-f83a-4f3a-aeeb-10ac91a4aa8b name=/runtime.v1.RuntimeService/Version
	Jul 29 18:48:21 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:48:21.928801557Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e71182be-f83a-4f3a-aeeb-10ac91a4aa8b name=/runtime.v1.RuntimeService/Version
	Jul 29 18:48:21 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:48:21.929706593Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc571316-1ec3-4984-84c8-0a96c301994a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:48:21 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:48:21.930172974Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278901930152200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc571316-1ec3-4984-84c8-0a96c301994a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:48:21 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:48:21.930730212Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f2e853d-effa-4974-9ff0-1f1c082c8751 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:48:21 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:48:21.930802178Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f2e853d-effa-4974-9ff0-1f1c082c8751 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:48:21 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:48:21.931042595Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481,PodSandboxId:2cf16ca38be5f93e11353858e2145e82ad7f347fb110214f32b29b49abec9064,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277664606333649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2264d30-60dc-41f9-9b84-3b073031cf1b,},Annotations:map[string]string{io.kubernetes.container.hash: 50277011,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703935071efbcce9eda793a20cfa2eb88e4bf49ea400ba3282cfc8eb25fa4881,PodSandboxId:c66545b1536588610a9c3b00d6383eb0caf5ad3456d5ff739bb3fc889713c494,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722277642620412552,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16c2fbe7-3235-4c35-b89d-d36c39f5e8e3,},Annotations:map[string]string{io.kubernetes.container.hash: 6562b6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b,PodSandboxId:de6ef185d4530830585e64f93c85fac72c9f067ef410fa2d1164d0d28291b083,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277641405583173,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mk6mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e005b1f9-cc7a-45aa-915e-85a461ebc814,},Annotations:map[string]string{io.kubernetes.container.hash: ecd1b366,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9,PodSandboxId:afc17abc1c9144526ad042ee737b7273b4316a55f78e4cccbae1fd4f5bcb0937,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722277633728301138,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cgdm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57a99bb3-9e
63-47dd-a958-5be7f3c0a9c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7d6ad1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b,PodSandboxId:2cf16ca38be5f93e11353858e2145e82ad7f347fb110214f32b29b49abec9064,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722277633652711442,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2264d30-60dc-41f9-9b84-
3b073031cf1b,},Annotations:map[string]string{io.kubernetes.container.hash: 50277011,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a,PodSandboxId:68cd61bf2e8391ef03eca0645fde97f05f40b36a85956efa3b289d50e213b255,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722277628915439843,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7fe6a09729d5e8dbafc14f0bd53ac8,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 7fe8cd16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4,PodSandboxId:c1b20eb7e651a8283f9648b84f5c31e2dbf00a6ad1dc88868562196ea70f49c4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722277628904473069,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54c6f6d9e5e180f29a0eb48ba166ce41,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 531499dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc,PodSandboxId:82547cb3c923e9c5881111ce82af9322ad9355c946b36cea567f286568a5996f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722277628855580748,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6946d6da2d8baa3ff93ee0849b60
c03,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd,PodSandboxId:286ad36c4b4b8168d515c0001413b5e02c2f8919968e7f7dfa27c328b742da38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722277628813397299,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b81f633ff1c79be9b30276a7840ab3b
3,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f2e853d-effa-4974-9ff0-1f1c082c8751 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:48:21 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:48:21.968915739Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4a53fe7f-1686-4c52-84df-28cf9b989616 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:48:21 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:48:21.969005174Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4a53fe7f-1686-4c52-84df-28cf9b989616 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:48:21 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:48:21.970734225Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce4cedf3-8df9-4de7-a025-321074cd2658 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:48:21 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:48:21.971269893Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278901971234050,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce4cedf3-8df9-4de7-a025-321074cd2658 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:48:21 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:48:21.972043944Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=175c5586-1d4a-4f89-90ca-074125de6ef1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:48:21 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:48:21.972098168Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=175c5586-1d4a-4f89-90ca-074125de6ef1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:48:21 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:48:21.972300898Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481,PodSandboxId:2cf16ca38be5f93e11353858e2145e82ad7f347fb110214f32b29b49abec9064,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277664606333649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2264d30-60dc-41f9-9b84-3b073031cf1b,},Annotations:map[string]string{io.kubernetes.container.hash: 50277011,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703935071efbcce9eda793a20cfa2eb88e4bf49ea400ba3282cfc8eb25fa4881,PodSandboxId:c66545b1536588610a9c3b00d6383eb0caf5ad3456d5ff739bb3fc889713c494,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722277642620412552,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16c2fbe7-3235-4c35-b89d-d36c39f5e8e3,},Annotations:map[string]string{io.kubernetes.container.hash: 6562b6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b,PodSandboxId:de6ef185d4530830585e64f93c85fac72c9f067ef410fa2d1164d0d28291b083,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277641405583173,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mk6mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e005b1f9-cc7a-45aa-915e-85a461ebc814,},Annotations:map[string]string{io.kubernetes.container.hash: ecd1b366,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9,PodSandboxId:afc17abc1c9144526ad042ee737b7273b4316a55f78e4cccbae1fd4f5bcb0937,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722277633728301138,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cgdm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57a99bb3-9e
63-47dd-a958-5be7f3c0a9c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7d6ad1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b,PodSandboxId:2cf16ca38be5f93e11353858e2145e82ad7f347fb110214f32b29b49abec9064,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722277633652711442,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2264d30-60dc-41f9-9b84-
3b073031cf1b,},Annotations:map[string]string{io.kubernetes.container.hash: 50277011,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a,PodSandboxId:68cd61bf2e8391ef03eca0645fde97f05f40b36a85956efa3b289d50e213b255,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722277628915439843,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7fe6a09729d5e8dbafc14f0bd53ac8,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 7fe8cd16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4,PodSandboxId:c1b20eb7e651a8283f9648b84f5c31e2dbf00a6ad1dc88868562196ea70f49c4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722277628904473069,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54c6f6d9e5e180f29a0eb48ba166ce41,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 531499dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc,PodSandboxId:82547cb3c923e9c5881111ce82af9322ad9355c946b36cea567f286568a5996f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722277628855580748,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6946d6da2d8baa3ff93ee0849b60
c03,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd,PodSandboxId:286ad36c4b4b8168d515c0001413b5e02c2f8919968e7f7dfa27c328b742da38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722277628813397299,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b81f633ff1c79be9b30276a7840ab3b
3,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=175c5586-1d4a-4f89-90ca-074125de6ef1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:48:22 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:48:22.004848359Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9f8ed838-470d-4cd0-a359-32dbe0e7b329 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:48:22 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:48:22.004973300Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9f8ed838-470d-4cd0-a359-32dbe0e7b329 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:48:22 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:48:22.006583212Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5429b94b-7016-4069-87e3-92741014a1b3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:48:22 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:48:22.007128841Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278902007104771,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5429b94b-7016-4069-87e3-92741014a1b3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:48:22 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:48:22.007738027Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=93c73728-2bfd-42fd-ad1e-192c4fe27983 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:48:22 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:48:22.007793421Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=93c73728-2bfd-42fd-ad1e-192c4fe27983 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:48:22 default-k8s-diff-port-502055 crio[731]: time="2024-07-29 18:48:22.008171651Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481,PodSandboxId:2cf16ca38be5f93e11353858e2145e82ad7f347fb110214f32b29b49abec9064,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277664606333649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2264d30-60dc-41f9-9b84-3b073031cf1b,},Annotations:map[string]string{io.kubernetes.container.hash: 50277011,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703935071efbcce9eda793a20cfa2eb88e4bf49ea400ba3282cfc8eb25fa4881,PodSandboxId:c66545b1536588610a9c3b00d6383eb0caf5ad3456d5ff739bb3fc889713c494,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722277642620412552,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16c2fbe7-3235-4c35-b89d-d36c39f5e8e3,},Annotations:map[string]string{io.kubernetes.container.hash: 6562b6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b,PodSandboxId:de6ef185d4530830585e64f93c85fac72c9f067ef410fa2d1164d0d28291b083,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277641405583173,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mk6mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e005b1f9-cc7a-45aa-915e-85a461ebc814,},Annotations:map[string]string{io.kubernetes.container.hash: ecd1b366,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9,PodSandboxId:afc17abc1c9144526ad042ee737b7273b4316a55f78e4cccbae1fd4f5bcb0937,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722277633728301138,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cgdm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57a99bb3-9e
63-47dd-a958-5be7f3c0a9c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7d6ad1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b,PodSandboxId:2cf16ca38be5f93e11353858e2145e82ad7f347fb110214f32b29b49abec9064,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722277633652711442,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2264d30-60dc-41f9-9b84-
3b073031cf1b,},Annotations:map[string]string{io.kubernetes.container.hash: 50277011,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a,PodSandboxId:68cd61bf2e8391ef03eca0645fde97f05f40b36a85956efa3b289d50e213b255,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722277628915439843,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7fe6a09729d5e8dbafc14f0bd53ac8,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 7fe8cd16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4,PodSandboxId:c1b20eb7e651a8283f9648b84f5c31e2dbf00a6ad1dc88868562196ea70f49c4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722277628904473069,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54c6f6d9e5e180f29a0eb48ba166ce41,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 531499dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc,PodSandboxId:82547cb3c923e9c5881111ce82af9322ad9355c946b36cea567f286568a5996f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722277628855580748,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6946d6da2d8baa3ff93ee0849b60
c03,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd,PodSandboxId:286ad36c4b4b8168d515c0001413b5e02c2f8919968e7f7dfa27c328b742da38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722277628813397299,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-502055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b81f633ff1c79be9b30276a7840ab3b
3,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=93c73728-2bfd-42fd-ad1e-192c4fe27983 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9d54b3da125ce       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   2cf16ca38be5f       storage-provisioner
	703935071efbc       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   c66545b153658       busybox
	2b2cc4240a68e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      21 minutes ago      Running             coredns                   1                   de6ef185d4530       coredns-7db6d8ff4d-mk6mx
	ec56fb749b981       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      21 minutes ago      Running             kube-proxy                1                   afc17abc1c914       kube-proxy-cgdm8
	482ca3200e17e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   2cf16ca38be5f       storage-provisioner
	fec93784adcb5       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      21 minutes ago      Running             etcd                      1                   68cd61bf2e839       etcd-default-k8s-diff-port-502055
	630d0a93e04a3       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      21 minutes ago      Running             kube-apiserver            1                   c1b20eb7e651a       kube-apiserver-default-k8s-diff-port-502055
	92b99f54da092       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      21 minutes ago      Running             kube-controller-manager   1                   82547cb3c923e       kube-controller-manager-default-k8s-diff-port-502055
	991e6d9556b66       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      21 minutes ago      Running             kube-scheduler            1                   286ad36c4b4b8       kube-scheduler-default-k8s-diff-port-502055
	
	
	==> coredns [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54419 - 25158 "HINFO IN 4898418047693939155.180201105836920316. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.044292202s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-502055
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-502055
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=default-k8s-diff-port-502055
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T18_19_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:19:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-502055
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:48:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:48:07 +0000   Mon, 29 Jul 2024 18:18:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:48:07 +0000   Mon, 29 Jul 2024 18:18:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:48:07 +0000   Mon, 29 Jul 2024 18:18:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:48:07 +0000   Mon, 29 Jul 2024 18:27:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.244
	  Hostname:    default-k8s-diff-port-502055
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4343992a877646ba8ffa1e0f4210b5e8
	  System UUID:                4343992a-8776-46ba-8ffa-1e0f4210b5e8
	  Boot ID:                    89fe04ef-dd54-4da9-b9fc-d86630fc2277
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7db6d8ff4d-mk6mx                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-502055                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-502055              250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-502055     200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-cgdm8                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-502055              100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-569cc877fc-bm8tm                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 29m                kube-proxy       
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-502055 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-502055 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-502055 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-502055 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-502055 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-502055 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-502055 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-502055 event: Registered Node default-k8s-diff-port-502055 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-502055 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-502055 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-502055 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-502055 event: Registered Node default-k8s-diff-port-502055 in Controller
	
	
	==> dmesg <==
	[Jul29 18:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051858] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042959] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.978612] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.573898] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.576945] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul29 18:27] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.058469] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077397] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.208652] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.157971] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.289097] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[  +4.460508] systemd-fstab-generator[812]: Ignoring "noauto" option for root device
	[  +0.070122] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.435342] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +5.632464] kauditd_printk_skb: 97 callbacks suppressed
	[  +1.938032] systemd-fstab-generator[1549]: Ignoring "noauto" option for root device
	[  +3.740627] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.860335] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a] <==
	{"level":"info","ts":"2024-07-29T18:27:11.153172Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T18:27:11.153183Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.244:2379"}
	{"level":"info","ts":"2024-07-29T18:27:27.176245Z","caller":"traceutil/trace.go:171","msg":"trace[1788071170] transaction","detail":"{read_only:false; response_revision:633; number_of_response:1; }","duration":"143.287049ms","start":"2024-07-29T18:27:27.032934Z","end":"2024-07-29T18:27:27.176221Z","steps":["trace[1788071170] 'process raft request'  (duration: 143.178768ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T18:27:27.478265Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.37196ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11956653488811645318 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-502055\" mod_revision:633 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-502055\" value_size:6669 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-502055\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-29T18:27:27.479153Z","caller":"traceutil/trace.go:171","msg":"trace[2115950415] transaction","detail":"{read_only:false; response_revision:634; number_of_response:1; }","duration":"284.905872ms","start":"2024-07-29T18:27:27.194227Z","end":"2024-07-29T18:27:27.479133Z","steps":["trace[2115950415] 'process raft request'  (duration: 126.238092ms)","trace[2115950415] 'compare'  (duration: 157.271443ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T18:27:27.620341Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.920766ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11956653488811645319 > lease_revoke:<id:25ee90ffb78ee945>","response":"size:28"}
	{"level":"info","ts":"2024-07-29T18:27:44.61149Z","caller":"traceutil/trace.go:171","msg":"trace[353620629] transaction","detail":"{read_only:false; response_revision:646; number_of_response:1; }","duration":"391.789488ms","start":"2024-07-29T18:27:44.219685Z","end":"2024-07-29T18:27:44.611474Z","steps":["trace[353620629] 'process raft request'  (duration: 391.611367ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T18:27:44.611624Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T18:27:44.219673Z","time spent":"391.882765ms","remote":"127.0.0.1:36266","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":833,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-569cc877fc-bm8tm.17e6c2638abd5b65\" mod_revision:605 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-bm8tm.17e6c2638abd5b65\" value_size:738 lease:2733281451956869286 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-bm8tm.17e6c2638abd5b65\" > >"}
	{"level":"info","ts":"2024-07-29T18:27:44.611861Z","caller":"traceutil/trace.go:171","msg":"trace[1350280374] linearizableReadLoop","detail":"{readStateIndex:694; appliedIndex:694; }","duration":"391.868314ms","start":"2024-07-29T18:27:44.21998Z","end":"2024-07-29T18:27:44.611849Z","steps":["trace[1350280374] 'read index received'  (duration: 391.86542ms)","trace[1350280374] 'applied index is now lower than readState.Index'  (duration: 2.193µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T18:27:44.612053Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"392.061887ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-502055\" ","response":"range_response_count:1 size:5802"}
	{"level":"info","ts":"2024-07-29T18:27:44.612093Z","caller":"traceutil/trace.go:171","msg":"trace[785010141] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-502055; range_end:; response_count:1; response_revision:646; }","duration":"392.11047ms","start":"2024-07-29T18:27:44.219976Z","end":"2024-07-29T18:27:44.612087Z","steps":["trace[785010141] 'agreement among raft nodes before linearized reading'  (duration: 391.97531ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T18:27:44.612114Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T18:27:44.219951Z","time spent":"392.157883ms","remote":"127.0.0.1:36366","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":5825,"request content":"key:\"/registry/minions/default-k8s-diff-port-502055\" "}
	{"level":"info","ts":"2024-07-29T18:27:44.617191Z","caller":"traceutil/trace.go:171","msg":"trace[1394523806] transaction","detail":"{read_only:false; response_revision:647; number_of_response:1; }","duration":"394.592986ms","start":"2024-07-29T18:27:44.222575Z","end":"2024-07-29T18:27:44.617168Z","steps":["trace[1394523806] 'process raft request'  (duration: 394.510873ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T18:27:44.617356Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T18:27:44.22256Z","time spent":"394.732321ms","remote":"127.0.0.1:36382","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4278,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-569cc877fc-bm8tm\" mod_revision:636 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-bm8tm\" value_size:4212 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-bm8tm\" > >"}
	{"level":"info","ts":"2024-07-29T18:37:11.19113Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":877}
	{"level":"info","ts":"2024-07-29T18:37:11.20747Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":877,"took":"15.701049ms","hash":1876331718,"current-db-size-bytes":2637824,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2637824,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-07-29T18:37:11.207566Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1876331718,"revision":877,"compact-revision":-1}
	{"level":"info","ts":"2024-07-29T18:42:11.198818Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1120}
	{"level":"info","ts":"2024-07-29T18:42:11.202801Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1120,"took":"3.605156ms","hash":3391749144,"current-db-size-bytes":2637824,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1581056,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-29T18:42:11.202919Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3391749144,"revision":1120,"compact-revision":877}
	{"level":"info","ts":"2024-07-29T18:47:11.206588Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1363}
	{"level":"info","ts":"2024-07-29T18:47:11.211554Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1363,"took":"4.461359ms","hash":1184024363,"current-db-size-bytes":2637824,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1540096,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-07-29T18:47:11.211619Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1184024363,"revision":1363,"compact-revision":1120}
	{"level":"warn","ts":"2024-07-29T18:47:56.396849Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.545813ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T18:47:56.397104Z","caller":"traceutil/trace.go:171","msg":"trace[981856109] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; response_count:0; response_revision:1643; }","duration":"105.89887ms","start":"2024-07-29T18:47:56.291162Z","end":"2024-07-29T18:47:56.397061Z","steps":["trace[981856109] 'count revisions from in-memory index tree'  (duration: 105.498625ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:48:22 up 21 min,  0 users,  load average: 0.02, 0.07, 0.09
	Linux default-k8s-diff-port-502055 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4] <==
	I0729 18:43:13.527818       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 18:45:13.527299       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 18:45:13.527646       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 18:45:13.527700       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 18:45:13.528934       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 18:45:13.529145       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 18:45:13.529227       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 18:47:12.530018       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 18:47:12.530293       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0729 18:47:13.530460       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 18:47:13.530572       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 18:47:13.530583       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 18:47:13.530661       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 18:47:13.530708       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 18:47:13.531955       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 18:48:13.531687       1 handler_proxy.go:93] no RequestInfo found in the context
	W0729 18:48:13.532020       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 18:48:13.532088       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 18:48:13.532163       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0729 18:48:13.532196       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 18:48:13.533984       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc] <==
	I0729 18:42:25.921242       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:42:55.414431       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:42:55.928978       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 18:43:22.236842       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="1.516203ms"
	E0729 18:43:25.420099       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:43:25.938159       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 18:43:36.229005       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="309.411µs"
	E0729 18:43:55.425566       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:43:55.945079       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:44:25.431666       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:44:25.958404       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:44:55.436848       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:44:55.966556       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:45:25.442100       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:45:25.974210       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:45:55.447788       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:45:55.981468       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:46:25.453453       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:46:25.990707       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:46:55.459508       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:46:56.001202       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:47:25.463993       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:47:26.008526       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:47:55.468790       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:47:56.017386       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9] <==
	I0729 18:27:13.953921       1 server_linux.go:69] "Using iptables proxy"
	I0729 18:27:13.978992       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.244"]
	I0729 18:27:14.035464       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 18:27:14.035653       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 18:27:14.035697       1 server_linux.go:165] "Using iptables Proxier"
	I0729 18:27:14.038507       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 18:27:14.038725       1 server.go:872] "Version info" version="v1.30.3"
	I0729 18:27:14.038930       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:27:14.040216       1 config.go:192] "Starting service config controller"
	I0729 18:27:14.040263       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 18:27:14.040300       1 config.go:101] "Starting endpoint slice config controller"
	I0729 18:27:14.040317       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 18:27:14.040785       1 config.go:319] "Starting node config controller"
	I0729 18:27:14.040938       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 18:27:14.140815       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 18:27:14.140990       1 shared_informer.go:320] Caches are synced for node config
	I0729 18:27:14.141031       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd] <==
	I0729 18:27:09.715655       1 serving.go:380] Generated self-signed cert in-memory
	W0729 18:27:12.463128       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 18:27:12.463221       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 18:27:12.463233       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 18:27:12.463242       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 18:27:12.555549       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 18:27:12.555642       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:27:12.557333       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 18:27:12.559027       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 18:27:12.564703       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 18:27:12.559047       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 18:27:12.664961       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 18:46:08 default-k8s-diff-port-502055 kubelet[943]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:46:08 default-k8s-diff-port-502055 kubelet[943]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:46:22 default-k8s-diff-port-502055 kubelet[943]: E0729 18:46:22.212260     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bm8tm" podUID="6891d9ee-82db-4307-adf1-ff60d35506bc"
	Jul 29 18:46:35 default-k8s-diff-port-502055 kubelet[943]: E0729 18:46:35.212961     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bm8tm" podUID="6891d9ee-82db-4307-adf1-ff60d35506bc"
	Jul 29 18:46:50 default-k8s-diff-port-502055 kubelet[943]: E0729 18:46:50.212046     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bm8tm" podUID="6891d9ee-82db-4307-adf1-ff60d35506bc"
	Jul 29 18:47:04 default-k8s-diff-port-502055 kubelet[943]: E0729 18:47:04.214771     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bm8tm" podUID="6891d9ee-82db-4307-adf1-ff60d35506bc"
	Jul 29 18:47:08 default-k8s-diff-port-502055 kubelet[943]: E0729 18:47:08.242384     943 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:47:08 default-k8s-diff-port-502055 kubelet[943]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:47:08 default-k8s-diff-port-502055 kubelet[943]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:47:08 default-k8s-diff-port-502055 kubelet[943]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:47:08 default-k8s-diff-port-502055 kubelet[943]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:47:16 default-k8s-diff-port-502055 kubelet[943]: E0729 18:47:16.213018     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bm8tm" podUID="6891d9ee-82db-4307-adf1-ff60d35506bc"
	Jul 29 18:47:28 default-k8s-diff-port-502055 kubelet[943]: E0729 18:47:28.212411     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bm8tm" podUID="6891d9ee-82db-4307-adf1-ff60d35506bc"
	Jul 29 18:47:42 default-k8s-diff-port-502055 kubelet[943]: E0729 18:47:42.212479     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bm8tm" podUID="6891d9ee-82db-4307-adf1-ff60d35506bc"
	Jul 29 18:47:54 default-k8s-diff-port-502055 kubelet[943]: E0729 18:47:54.213048     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bm8tm" podUID="6891d9ee-82db-4307-adf1-ff60d35506bc"
	Jul 29 18:48:05 default-k8s-diff-port-502055 kubelet[943]: E0729 18:48:05.212323     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bm8tm" podUID="6891d9ee-82db-4307-adf1-ff60d35506bc"
	Jul 29 18:48:08 default-k8s-diff-port-502055 kubelet[943]: E0729 18:48:08.241835     943 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:48:08 default-k8s-diff-port-502055 kubelet[943]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:48:08 default-k8s-diff-port-502055 kubelet[943]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:48:08 default-k8s-diff-port-502055 kubelet[943]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:48:08 default-k8s-diff-port-502055 kubelet[943]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:48:19 default-k8s-diff-port-502055 kubelet[943]: E0729 18:48:19.227835     943 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 29 18:48:19 default-k8s-diff-port-502055 kubelet[943]: E0729 18:48:19.228298     943 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 29 18:48:19 default-k8s-diff-port-502055 kubelet[943]: E0729 18:48:19.228665     943 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k2nc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathEx
pr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Stdin
Once:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-bm8tm_kube-system(6891d9ee-82db-4307-adf1-ff60d35506bc): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 29 18:48:19 default-k8s-diff-port-502055 kubelet[943]: E0729 18:48:19.229090     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-bm8tm" podUID="6891d9ee-82db-4307-adf1-ff60d35506bc"
	
	
	==> storage-provisioner [482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b] <==
	I0729 18:27:13.812162       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0729 18:27:43.820179       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481] <==
	I0729 18:27:44.733703       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 18:27:44.748272       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 18:27:44.748396       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 18:28:02.151183       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 18:28:02.151683       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"297350fb-9b3e-4dd8-b768-ae51a278f99d", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-502055_5b47818b-c1c3-4ab1-b16d-cadac8cf42d0 became leader
	I0729 18:28:02.151781       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-502055_5b47818b-c1c3-4ab1-b16d-cadac8cf42d0!
	I0729 18:28:02.252859       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-502055_5b47818b-c1c3-4ab1-b16d-cadac8cf42d0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-502055 -n default-k8s-diff-port-502055
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-502055 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-bm8tm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-502055 describe pod metrics-server-569cc877fc-bm8tm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-502055 describe pod metrics-server-569cc877fc-bm8tm: exit status 1 (60.735726ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-bm8tm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-502055 describe pod metrics-server-569cc877fc-bm8tm: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (458.98s)
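Note on the ErrImagePull lines in the kubelet log above: the metrics-server addon was enabled with a registry override pointing at fake.domain (see the Audit table in the next section), so the failed pull is the expected outcome of that override rather than an infrastructure fault, and it is what leaves metrics-server-569cc877fc-bm8tm as the only non-running pod in the post-mortem. The override pattern, with the flags copied from the Audit table rather than newly invented:

    out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-502055 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    # fake.domain does not resolve, so crio's image pull fails with "no such host"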

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (375.65s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-409322 -n embed-certs-409322
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-29 18:47:28.196353848 +0000 UTC m=+6704.863722138
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-409322 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-409322 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (8.263µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-409322 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
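The failure above comes from two checks: a 9m wait for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace, followed by an inspection of the dashboard-metrics-scraper deployment for the registry.k8s.io/echoserver:1.4 image. A rough, untested manual equivalent of those two steps (label, namespace, context and timeout are taken from the messages above; the exact kubectl invocations are a sketch, not what the test runs):

    # wait for the dashboard pods the test polls for
    kubectl --context embed-certs-409322 -n kubernetes-dashboard wait pod \
      -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m
    # then check which image the scraper deployment actually carries
    kubectl --context embed-certs-409322 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}'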
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-409322 -n embed-certs-409322
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-409322 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-409322 logs -n 25: (1.277844877s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-729010 sudo crio                             | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-729010                                       | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	| delete  | -p                                                     | disable-driver-mounts-603863 | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | disable-driver-mounts-603863                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:19 UTC |
	|         | default-k8s-diff-port-502055                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-888056             | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC | 29 Jul 24 18:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-888056                                   | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-409322            | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC | 29 Jul 24 18:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-409322                                  | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-502055  | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC |                     |
	|         | default-k8s-diff-port-502055                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-386663        | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-888056                  | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-888056 --memory=2200                     | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:33 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-409322                 | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-409322                                  | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-502055       | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:31 UTC |
	|         | default-k8s-diff-port-502055                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-386663                              | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:22 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-386663             | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-386663                              | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-386663                              | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC | 29 Jul 24 18:46 UTC |
	| start   | -p newest-cni-903256 --memory=2200 --alsologtostderr   | newest-cni-903256            | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC | 29 Jul 24 18:47 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-903256             | newest-cni-903256            | jenkins | v1.33.1 | 29 Jul 24 18:47 UTC | 29 Jul 24 18:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-903256                                   | newest-cni-903256            | jenkins | v1.33.1 | 29 Jul 24 18:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-888056                                   | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:47 UTC | 29 Jul 24 18:47 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 18:46:30
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 18:46:30.068748   84251 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:46:30.068839   84251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:46:30.068843   84251 out.go:304] Setting ErrFile to fd 2...
	I0729 18:46:30.068847   84251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:46:30.069023   84251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 18:46:30.069574   84251 out.go:298] Setting JSON to false
	I0729 18:46:30.070521   84251 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":8942,"bootTime":1722269848,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:46:30.070576   84251 start.go:139] virtualization: kvm guest
	I0729 18:46:30.072814   84251 out.go:177] * [newest-cni-903256] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:46:30.074119   84251 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 18:46:30.074124   84251 notify.go:220] Checking for updates...
	I0729 18:46:30.075449   84251 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:46:30.076849   84251 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:46:30.078050   84251 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 18:46:30.079534   84251 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:46:30.080830   84251 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:46:30.082628   84251 config.go:182] Loaded profile config "default-k8s-diff-port-502055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:46:30.082738   84251 config.go:182] Loaded profile config "embed-certs-409322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:46:30.082843   84251 config.go:182] Loaded profile config "no-preload-888056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:46:30.082933   84251 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:46:30.120413   84251 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 18:46:30.121714   84251 start.go:297] selected driver: kvm2
	I0729 18:46:30.121734   84251 start.go:901] validating driver "kvm2" against <nil>
	I0729 18:46:30.121748   84251 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:46:30.122750   84251 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:46:30.122844   84251 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19345-11206/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:46:30.138507   84251 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:46:30.138547   84251 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0729 18:46:30.138572   84251 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0729 18:46:30.138763   84251 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 18:46:30.138813   84251 cni.go:84] Creating CNI manager for ""
	I0729 18:46:30.138825   84251 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:46:30.138835   84251 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 18:46:30.138890   84251 start.go:340] cluster config:
	{Name:newest-cni-903256 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-903256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:46:30.139003   84251 iso.go:125] acquiring lock: {Name:mke302f851ce8256f9b44dd080ed38df68285cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:46:30.140640   84251 out.go:177] * Starting "newest-cni-903256" primary control-plane node in "newest-cni-903256" cluster
	I0729 18:46:30.141840   84251 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 18:46:30.141887   84251 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 18:46:30.141900   84251 cache.go:56] Caching tarball of preloaded images
	I0729 18:46:30.141986   84251 preload.go:172] Found /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:46:30.142001   84251 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0729 18:46:30.142091   84251 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/config.json ...
	I0729 18:46:30.142115   84251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/config.json: {Name:mk0ffbc23f36706df4690c2ad4313143e8f4dddb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:46:30.142272   84251 start.go:360] acquireMachinesLock for newest-cni-903256: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:46:30.142324   84251 start.go:364] duration metric: took 36.267µs to acquireMachinesLock for "newest-cni-903256"
	I0729 18:46:30.142348   84251 start.go:93] Provisioning new machine with config: &{Name:newest-cni-903256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-903256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minik
ube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:46:30.142453   84251 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 18:46:30.144008   84251 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 18:46:30.144197   84251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:46:30.144235   84251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:46:30.158515   84251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37823
	I0729 18:46:30.158914   84251 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:46:30.159427   84251 main.go:141] libmachine: Using API Version  1
	I0729 18:46:30.159446   84251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:46:30.159750   84251 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:46:30.159935   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetMachineName
	I0729 18:46:30.160081   84251 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:46:30.160260   84251 start.go:159] libmachine.API.Create for "newest-cni-903256" (driver="kvm2")
	I0729 18:46:30.160291   84251 client.go:168] LocalClient.Create starting
	I0729 18:46:30.160324   84251 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem
	I0729 18:46:30.160367   84251 main.go:141] libmachine: Decoding PEM data...
	I0729 18:46:30.160387   84251 main.go:141] libmachine: Parsing certificate...
	I0729 18:46:30.160462   84251 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem
	I0729 18:46:30.160486   84251 main.go:141] libmachine: Decoding PEM data...
	I0729 18:46:30.160503   84251 main.go:141] libmachine: Parsing certificate...
	I0729 18:46:30.160527   84251 main.go:141] libmachine: Running pre-create checks...
	I0729 18:46:30.160538   84251 main.go:141] libmachine: (newest-cni-903256) Calling .PreCreateCheck
	I0729 18:46:30.160867   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetConfigRaw
	I0729 18:46:30.161218   84251 main.go:141] libmachine: Creating machine...
	I0729 18:46:30.161231   84251 main.go:141] libmachine: (newest-cni-903256) Calling .Create
	I0729 18:46:30.161353   84251 main.go:141] libmachine: (newest-cni-903256) Creating KVM machine...
	I0729 18:46:30.162583   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found existing default KVM network
	I0729 18:46:30.163795   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:30.163643   84274 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ab:5b:77} reservation:<nil>}
	I0729 18:46:30.164793   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:30.164720   84274 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a87b0}
	I0729 18:46:30.164811   84251 main.go:141] libmachine: (newest-cni-903256) DBG | created network xml: 
	I0729 18:46:30.164827   84251 main.go:141] libmachine: (newest-cni-903256) DBG | <network>
	I0729 18:46:30.164845   84251 main.go:141] libmachine: (newest-cni-903256) DBG |   <name>mk-newest-cni-903256</name>
	I0729 18:46:30.164863   84251 main.go:141] libmachine: (newest-cni-903256) DBG |   <dns enable='no'/>
	I0729 18:46:30.164875   84251 main.go:141] libmachine: (newest-cni-903256) DBG |   
	I0729 18:46:30.164886   84251 main.go:141] libmachine: (newest-cni-903256) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0729 18:46:30.164900   84251 main.go:141] libmachine: (newest-cni-903256) DBG |     <dhcp>
	I0729 18:46:30.164914   84251 main.go:141] libmachine: (newest-cni-903256) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0729 18:46:30.164921   84251 main.go:141] libmachine: (newest-cni-903256) DBG |     </dhcp>
	I0729 18:46:30.164930   84251 main.go:141] libmachine: (newest-cni-903256) DBG |   </ip>
	I0729 18:46:30.164935   84251 main.go:141] libmachine: (newest-cni-903256) DBG |   
	I0729 18:46:30.164942   84251 main.go:141] libmachine: (newest-cni-903256) DBG | </network>
	I0729 18:46:30.164949   84251 main.go:141] libmachine: (newest-cni-903256) DBG | 
	I0729 18:46:30.170064   84251 main.go:141] libmachine: (newest-cni-903256) DBG | trying to create private KVM network mk-newest-cni-903256 192.168.50.0/24...
	I0729 18:46:30.242424   84251 main.go:141] libmachine: (newest-cni-903256) Setting up store path in /home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256 ...
	I0729 18:46:30.242467   84251 main.go:141] libmachine: (newest-cni-903256) Building disk image from file:///home/jenkins/minikube-integration/19345-11206/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 18:46:30.242478   84251 main.go:141] libmachine: (newest-cni-903256) DBG | private KVM network mk-newest-cni-903256 192.168.50.0/24 created
	I0729 18:46:30.242498   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:30.242335   84274 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 18:46:30.242673   84251 main.go:141] libmachine: (newest-cni-903256) Downloading /home/jenkins/minikube-integration/19345-11206/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19345-11206/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 18:46:30.494753   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:30.494601   84274 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa...
	I0729 18:46:30.681600   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:30.681500   84274 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/newest-cni-903256.rawdisk...
	I0729 18:46:30.681627   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Writing magic tar header
	I0729 18:46:30.681638   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Writing SSH key tar header
	I0729 18:46:30.681650   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:30.681620   84274 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256 ...
	I0729 18:46:30.681759   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256
	I0729 18:46:30.681785   84251 main.go:141] libmachine: (newest-cni-903256) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256 (perms=drwx------)
	I0729 18:46:30.681797   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube/machines
	I0729 18:46:30.681812   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 18:46:30.681820   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206
	I0729 18:46:30.681828   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 18:46:30.681833   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Checking permissions on dir: /home/jenkins
	I0729 18:46:30.681840   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Checking permissions on dir: /home
	I0729 18:46:30.681848   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Skipping /home - not owner
	I0729 18:46:30.681867   84251 main.go:141] libmachine: (newest-cni-903256) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube/machines (perms=drwxr-xr-x)
	I0729 18:46:30.681895   84251 main.go:141] libmachine: (newest-cni-903256) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube (perms=drwxr-xr-x)
	I0729 18:46:30.681908   84251 main.go:141] libmachine: (newest-cni-903256) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206 (perms=drwxrwxr-x)
	I0729 18:46:30.681926   84251 main.go:141] libmachine: (newest-cni-903256) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 18:46:30.681938   84251 main.go:141] libmachine: (newest-cni-903256) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 18:46:30.681949   84251 main.go:141] libmachine: (newest-cni-903256) Creating domain...
	I0729 18:46:30.683156   84251 main.go:141] libmachine: (newest-cni-903256) define libvirt domain using xml: 
	I0729 18:46:30.683175   84251 main.go:141] libmachine: (newest-cni-903256) <domain type='kvm'>
	I0729 18:46:30.683183   84251 main.go:141] libmachine: (newest-cni-903256)   <name>newest-cni-903256</name>
	I0729 18:46:30.683188   84251 main.go:141] libmachine: (newest-cni-903256)   <memory unit='MiB'>2200</memory>
	I0729 18:46:30.683195   84251 main.go:141] libmachine: (newest-cni-903256)   <vcpu>2</vcpu>
	I0729 18:46:30.683201   84251 main.go:141] libmachine: (newest-cni-903256)   <features>
	I0729 18:46:30.683208   84251 main.go:141] libmachine: (newest-cni-903256)     <acpi/>
	I0729 18:46:30.683213   84251 main.go:141] libmachine: (newest-cni-903256)     <apic/>
	I0729 18:46:30.683225   84251 main.go:141] libmachine: (newest-cni-903256)     <pae/>
	I0729 18:46:30.683232   84251 main.go:141] libmachine: (newest-cni-903256)     
	I0729 18:46:30.683238   84251 main.go:141] libmachine: (newest-cni-903256)   </features>
	I0729 18:46:30.683245   84251 main.go:141] libmachine: (newest-cni-903256)   <cpu mode='host-passthrough'>
	I0729 18:46:30.683250   84251 main.go:141] libmachine: (newest-cni-903256)   
	I0729 18:46:30.683256   84251 main.go:141] libmachine: (newest-cni-903256)   </cpu>
	I0729 18:46:30.683261   84251 main.go:141] libmachine: (newest-cni-903256)   <os>
	I0729 18:46:30.683268   84251 main.go:141] libmachine: (newest-cni-903256)     <type>hvm</type>
	I0729 18:46:30.683299   84251 main.go:141] libmachine: (newest-cni-903256)     <boot dev='cdrom'/>
	I0729 18:46:30.683323   84251 main.go:141] libmachine: (newest-cni-903256)     <boot dev='hd'/>
	I0729 18:46:30.683348   84251 main.go:141] libmachine: (newest-cni-903256)     <bootmenu enable='no'/>
	I0729 18:46:30.683366   84251 main.go:141] libmachine: (newest-cni-903256)   </os>
	I0729 18:46:30.683379   84251 main.go:141] libmachine: (newest-cni-903256)   <devices>
	I0729 18:46:30.683391   84251 main.go:141] libmachine: (newest-cni-903256)     <disk type='file' device='cdrom'>
	I0729 18:46:30.683419   84251 main.go:141] libmachine: (newest-cni-903256)       <source file='/home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/boot2docker.iso'/>
	I0729 18:46:30.683431   84251 main.go:141] libmachine: (newest-cni-903256)       <target dev='hdc' bus='scsi'/>
	I0729 18:46:30.683445   84251 main.go:141] libmachine: (newest-cni-903256)       <readonly/>
	I0729 18:46:30.683458   84251 main.go:141] libmachine: (newest-cni-903256)     </disk>
	I0729 18:46:30.683472   84251 main.go:141] libmachine: (newest-cni-903256)     <disk type='file' device='disk'>
	I0729 18:46:30.683484   84251 main.go:141] libmachine: (newest-cni-903256)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 18:46:30.683500   84251 main.go:141] libmachine: (newest-cni-903256)       <source file='/home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/newest-cni-903256.rawdisk'/>
	I0729 18:46:30.683510   84251 main.go:141] libmachine: (newest-cni-903256)       <target dev='hda' bus='virtio'/>
	I0729 18:46:30.683522   84251 main.go:141] libmachine: (newest-cni-903256)     </disk>
	I0729 18:46:30.683540   84251 main.go:141] libmachine: (newest-cni-903256)     <interface type='network'>
	I0729 18:46:30.683553   84251 main.go:141] libmachine: (newest-cni-903256)       <source network='mk-newest-cni-903256'/>
	I0729 18:46:30.683564   84251 main.go:141] libmachine: (newest-cni-903256)       <model type='virtio'/>
	I0729 18:46:30.683573   84251 main.go:141] libmachine: (newest-cni-903256)     </interface>
	I0729 18:46:30.683583   84251 main.go:141] libmachine: (newest-cni-903256)     <interface type='network'>
	I0729 18:46:30.683597   84251 main.go:141] libmachine: (newest-cni-903256)       <source network='default'/>
	I0729 18:46:30.683611   84251 main.go:141] libmachine: (newest-cni-903256)       <model type='virtio'/>
	I0729 18:46:30.683623   84251 main.go:141] libmachine: (newest-cni-903256)     </interface>
	I0729 18:46:30.683631   84251 main.go:141] libmachine: (newest-cni-903256)     <serial type='pty'>
	I0729 18:46:30.683642   84251 main.go:141] libmachine: (newest-cni-903256)       <target port='0'/>
	I0729 18:46:30.683652   84251 main.go:141] libmachine: (newest-cni-903256)     </serial>
	I0729 18:46:30.683661   84251 main.go:141] libmachine: (newest-cni-903256)     <console type='pty'>
	I0729 18:46:30.683678   84251 main.go:141] libmachine: (newest-cni-903256)       <target type='serial' port='0'/>
	I0729 18:46:30.683694   84251 main.go:141] libmachine: (newest-cni-903256)     </console>
	I0729 18:46:30.683703   84251 main.go:141] libmachine: (newest-cni-903256)     <rng model='virtio'>
	I0729 18:46:30.683712   84251 main.go:141] libmachine: (newest-cni-903256)       <backend model='random'>/dev/random</backend>
	I0729 18:46:30.683720   84251 main.go:141] libmachine: (newest-cni-903256)     </rng>
	I0729 18:46:30.683725   84251 main.go:141] libmachine: (newest-cni-903256)     
	I0729 18:46:30.683733   84251 main.go:141] libmachine: (newest-cni-903256)     
	I0729 18:46:30.683744   84251 main.go:141] libmachine: (newest-cni-903256)   </devices>
	I0729 18:46:30.683752   84251 main.go:141] libmachine: (newest-cni-903256) </domain>
	I0729 18:46:30.683764   84251 main.go:141] libmachine: (newest-cni-903256) 
	I0729 18:46:30.688085   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:18:c2:da in network default
	I0729 18:46:30.688651   84251 main.go:141] libmachine: (newest-cni-903256) Ensuring networks are active...
	I0729 18:46:30.688672   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:30.689327   84251 main.go:141] libmachine: (newest-cni-903256) Ensuring network default is active
	I0729 18:46:30.689607   84251 main.go:141] libmachine: (newest-cni-903256) Ensuring network mk-newest-cni-903256 is active
	I0729 18:46:30.690045   84251 main.go:141] libmachine: (newest-cni-903256) Getting domain xml...
	I0729 18:46:30.690701   84251 main.go:141] libmachine: (newest-cni-903256) Creating domain...
	I0729 18:46:31.952214   84251 main.go:141] libmachine: (newest-cni-903256) Waiting to get IP...
	I0729 18:46:31.952988   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:31.953420   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:31.953439   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:31.953396   84274 retry.go:31] will retry after 261.751977ms: waiting for machine to come up
	I0729 18:46:32.216957   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:32.217417   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:32.217447   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:32.217369   84274 retry.go:31] will retry after 270.043866ms: waiting for machine to come up
	I0729 18:46:32.488853   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:32.489379   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:32.489412   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:32.489350   84274 retry.go:31] will retry after 335.253907ms: waiting for machine to come up
	I0729 18:46:32.825833   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:32.826289   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:32.826320   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:32.826219   84274 retry.go:31] will retry after 496.757412ms: waiting for machine to come up
	I0729 18:46:33.324528   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:33.324974   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:33.325002   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:33.324930   84274 retry.go:31] will retry after 672.944303ms: waiting for machine to come up
	I0729 18:46:34.000034   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:34.000523   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:34.000571   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:34.000490   84274 retry.go:31] will retry after 913.112646ms: waiting for machine to come up
	I0729 18:46:34.915564   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:34.915967   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:34.916005   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:34.915926   84274 retry.go:31] will retry after 766.485053ms: waiting for machine to come up
	I0729 18:46:35.684510   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:35.684979   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:35.685007   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:35.684927   84274 retry.go:31] will retry after 1.236100877s: waiting for machine to come up
	I0729 18:46:36.922270   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:36.922772   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:36.922810   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:36.922740   84274 retry.go:31] will retry after 1.142869002s: waiting for machine to come up
	I0729 18:46:38.067357   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:38.067751   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:38.067778   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:38.067704   84274 retry.go:31] will retry after 1.58112412s: waiting for machine to come up
	I0729 18:46:39.651433   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:39.651908   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:39.651945   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:39.651864   84274 retry.go:31] will retry after 2.06459354s: waiting for machine to come up
	I0729 18:46:41.717755   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:41.718270   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:41.718297   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:41.718234   84274 retry.go:31] will retry after 3.310077087s: waiting for machine to come up
	I0729 18:46:45.031346   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:45.031768   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:45.031796   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:45.031729   84274 retry.go:31] will retry after 3.683065397s: waiting for machine to come up
	I0729 18:46:48.718353   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:48.718789   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:48.718837   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:48.718722   84274 retry.go:31] will retry after 4.039703001s: waiting for machine to come up
	I0729 18:46:52.761590   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:52.762066   84251 main.go:141] libmachine: (newest-cni-903256) Found IP for machine: 192.168.50.148
	I0729 18:46:52.762098   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has current primary IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:52.762108   84251 main.go:141] libmachine: (newest-cni-903256) Reserving static IP address...
	I0729 18:46:52.762525   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find host DHCP lease matching {name: "newest-cni-903256", mac: "52:54:00:b7:b1:4e", ip: "192.168.50.148"} in network mk-newest-cni-903256
	I0729 18:46:52.841783   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Getting to WaitForSSH function...
	I0729 18:46:52.841813   84251 main.go:141] libmachine: (newest-cni-903256) Reserved static IP address: 192.168.50.148
	I0729 18:46:52.841826   84251 main.go:141] libmachine: (newest-cni-903256) Waiting for SSH to be available...
	I0729 18:46:52.845161   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:52.845611   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:52.845641   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:52.845837   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Using SSH client type: external
	I0729 18:46:52.845863   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa (-rw-------)
	I0729 18:46:52.845901   84251 main.go:141] libmachine: (newest-cni-903256) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:46:52.845917   84251 main.go:141] libmachine: (newest-cni-903256) DBG | About to run SSH command:
	I0729 18:46:52.845937   84251 main.go:141] libmachine: (newest-cni-903256) DBG | exit 0
	I0729 18:46:52.978416   84251 main.go:141] libmachine: (newest-cni-903256) DBG | SSH cmd err, output: <nil>: 
	I0729 18:46:52.978665   84251 main.go:141] libmachine: (newest-cni-903256) KVM machine creation complete!
	I0729 18:46:52.979018   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetConfigRaw
	I0729 18:46:52.979528   84251 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:46:52.979739   84251 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:46:52.979890   84251 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 18:46:52.979905   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetState
	I0729 18:46:52.981387   84251 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 18:46:52.981404   84251 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 18:46:52.981422   84251 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 18:46:52.981431   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:46:52.984217   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:52.984602   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:52.984621   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:52.984870   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:46:52.985043   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:52.985232   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:52.985369   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:46:52.985508   84251 main.go:141] libmachine: Using SSH client type: native
	I0729 18:46:52.985740   84251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.148 22 <nil> <nil>}
	I0729 18:46:52.985755   84251 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 18:46:53.093943   84251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:46:53.093965   84251 main.go:141] libmachine: Detecting the provisioner...
	I0729 18:46:53.093973   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:46:53.097127   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.097563   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:53.097586   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.097793   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:46:53.097985   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:53.098175   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:53.098317   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:46:53.098531   84251 main.go:141] libmachine: Using SSH client type: native
	I0729 18:46:53.098688   84251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.148 22 <nil> <nil>}
	I0729 18:46:53.098699   84251 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 18:46:53.208065   84251 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 18:46:53.208140   84251 main.go:141] libmachine: found compatible host: buildroot
	I0729 18:46:53.208150   84251 main.go:141] libmachine: Provisioning with buildroot...
	I0729 18:46:53.208158   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetMachineName
	I0729 18:46:53.208419   84251 buildroot.go:166] provisioning hostname "newest-cni-903256"
	I0729 18:46:53.208446   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetMachineName
	I0729 18:46:53.208656   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:46:53.211661   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.212055   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:53.212083   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.212277   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:46:53.212489   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:53.212670   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:53.212857   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:46:53.213022   84251 main.go:141] libmachine: Using SSH client type: native
	I0729 18:46:53.213256   84251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.148 22 <nil> <nil>}
	I0729 18:46:53.213276   84251 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-903256 && echo "newest-cni-903256" | sudo tee /etc/hostname
	I0729 18:46:53.337791   84251 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-903256
	
	I0729 18:46:53.337830   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:46:53.340983   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.341377   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:53.341404   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.341580   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:46:53.341786   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:53.341952   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:53.342071   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:46:53.342232   84251 main.go:141] libmachine: Using SSH client type: native
	I0729 18:46:53.342466   84251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.148 22 <nil> <nil>}
	I0729 18:46:53.342490   84251 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-903256' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-903256/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-903256' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:46:53.460242   84251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
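Hostname provisioning is two idempotent steps: set the hostname, then make sure it resolves locally via the 127.0.1.1 convention in /etc/hosts (replace an existing 127.0.1.1 entry if there is one, otherwise append a new line). A condensed sketch of what the two SSH commands above do on the guest, assuming the same hostname:

    NEW_HOSTNAME=newest-cni-903256
    sudo hostname "$NEW_HOSTNAME" && echo "$NEW_HOSTNAME" | sudo tee /etc/hostname
    if ! grep -q "[[:space:]]$NEW_HOSTNAME\$" /etc/hosts; then
      if grep -q '^127.0.1.1[[:space:]]' /etc/hosts; then
        sudo sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 $NEW_HOSTNAME/" /etc/hosts   # replace existing alias
      else
        echo "127.0.1.1 $NEW_HOSTNAME" | sudo tee -a /etc/hosts                       # append a new one
      fi
    fi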
	I0729 18:46:53.460278   84251 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:46:53.460330   84251 buildroot.go:174] setting up certificates
	I0729 18:46:53.460348   84251 provision.go:84] configureAuth start
	I0729 18:46:53.460363   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetMachineName
	I0729 18:46:53.460672   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetIP
	I0729 18:46:53.463624   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.464048   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:53.464079   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.464212   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:46:53.466742   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.467182   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:53.467215   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.467392   84251 provision.go:143] copyHostCerts
	I0729 18:46:53.467449   84251 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:46:53.467462   84251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:46:53.467550   84251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:46:53.467682   84251 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:46:53.467694   84251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:46:53.467731   84251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:46:53.467823   84251 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:46:53.467833   84251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:46:53.467867   84251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:46:53.467947   84251 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.newest-cni-903256 san=[127.0.0.1 192.168.50.148 localhost minikube newest-cni-903256]
	I0729 18:46:53.590406   84251 provision.go:177] copyRemoteCerts
	I0729 18:46:53.590510   84251 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:46:53.590547   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:46:53.593210   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.593574   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:53.593602   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.593784   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:46:53.593986   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:53.594200   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:46:53.594376   84251 sshutil.go:53] new ssh client: &{IP:192.168.50.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa Username:docker}
	I0729 18:46:53.681486   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:46:53.709490   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 18:46:53.737477   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:46:53.764179   84251 provision.go:87] duration metric: took 303.817364ms to configureAuth
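configureAuth generates a server certificate whose SANs cover every name the machine might be reached by (the list logged above: 127.0.0.1, 192.168.50.148, localhost, minikube, newest-cni-903256), signs it with the local CA under ~/.minikube/certs, and copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. minikube does this in-process in Go; the following hypothetical openssl equivalent is shown only to make the SAN handling concrete:

    # Hypothetical openssl equivalent of the server cert minikube generates in-process.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr -subj "/O=jenkins.newest-cni-903256"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.50.148,DNS:localhost,DNS:minikube,DNS:newest-cni-903256")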
	I0729 18:46:53.764205   84251 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:46:53.764368   84251 config.go:182] Loaded profile config "newest-cni-903256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:46:53.764428   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:46:53.767476   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.767902   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:53.767945   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.768173   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:46:53.768384   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:53.768551   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:53.768718   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:46:53.768894   84251 main.go:141] libmachine: Using SSH client type: native
	I0729 18:46:53.769112   84251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.148 22 <nil> <nil>}
	I0729 18:46:53.769134   84251 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:46:54.050199   84251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:46:54.050232   84251 main.go:141] libmachine: Checking connection to Docker...
	I0729 18:46:54.050243   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetURL
	I0729 18:46:54.051782   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Using libvirt version 6000000
	I0729 18:46:54.054540   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.054892   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:54.054923   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.055102   84251 main.go:141] libmachine: Docker is up and running!
	I0729 18:46:54.055122   84251 main.go:141] libmachine: Reticulating splines...
	I0729 18:46:54.055129   84251 client.go:171] duration metric: took 23.894829086s to LocalClient.Create
	I0729 18:46:54.055162   84251 start.go:167] duration metric: took 23.894902783s to libmachine.API.Create "newest-cni-903256"
	I0729 18:46:54.055173   84251 start.go:293] postStartSetup for "newest-cni-903256" (driver="kvm2")
	I0729 18:46:54.055184   84251 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:46:54.055199   84251 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:46:54.055481   84251 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:46:54.055508   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:46:54.057713   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.058095   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:54.058116   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.058273   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:46:54.058567   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:54.058723   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:46:54.058905   84251 sshutil.go:53] new ssh client: &{IP:192.168.50.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa Username:docker}
	I0729 18:46:54.145599   84251 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:46:54.150008   84251 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:46:54.150036   84251 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:46:54.150104   84251 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:46:54.150210   84251 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:46:54.150336   84251 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:46:54.160349   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:46:54.186202   84251 start.go:296] duration metric: took 131.017523ms for postStartSetup
	I0729 18:46:54.186252   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetConfigRaw
	I0729 18:46:54.186877   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetIP
	I0729 18:46:54.190687   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.191094   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:54.191123   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.191371   84251 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/config.json ...
	I0729 18:46:54.191580   84251 start.go:128] duration metric: took 24.049117711s to createHost
	I0729 18:46:54.191604   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:46:54.193915   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.194380   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:54.194410   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.194554   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:46:54.194734   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:54.194850   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:54.194957   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:46:54.195072   84251 main.go:141] libmachine: Using SSH client type: native
	I0729 18:46:54.195283   84251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.148 22 <nil> <nil>}
	I0729 18:46:54.195300   84251 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 18:46:54.307305   84251 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722278814.272489621
	
	I0729 18:46:54.307331   84251 fix.go:216] guest clock: 1722278814.272489621
	I0729 18:46:54.307340   84251 fix.go:229] Guest: 2024-07-29 18:46:54.272489621 +0000 UTC Remote: 2024-07-29 18:46:54.191592989 +0000 UTC m=+24.157710013 (delta=80.896632ms)
	I0729 18:46:54.307364   84251 fix.go:200] guest clock delta is within tolerance: 80.896632ms
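The guest-clock check above compares the VM's `date +%s.%N` output against the host's wall clock and only forces a resync when the drift exceeds a tolerance; here the ~81 ms delta is accepted. A rough, hand-run sketch of the same comparison (key path from this run):

    KEY=/home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa
    guest=$(ssh -o StrictHostKeyChecking=no -i "$KEY" docker@192.168.50.148 'date +%s.%N')  # guest wall clock
    host=$(date +%s.%N)                                                                      # host wall clock
    echo "guest/host clock delta: $(echo "$guest - $host" | bc)s"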
	I0729 18:46:54.307370   84251 start.go:83] releasing machines lock for "newest-cni-903256", held for 24.165034232s
	I0729 18:46:54.307392   84251 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:46:54.307689   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetIP
	I0729 18:46:54.310475   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.310823   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:54.310852   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.311090   84251 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:46:54.311579   84251 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:46:54.311741   84251 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:46:54.311852   84251 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:46:54.311902   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:46:54.311953   84251 ssh_runner.go:195] Run: cat /version.json
	I0729 18:46:54.311970   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:46:54.314787   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.314919   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.315182   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:54.315227   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.315310   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:46:54.315311   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:54.315331   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.315497   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:46:54.315504   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:54.315666   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:54.315666   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:46:54.315846   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:46:54.315857   84251 sshutil.go:53] new ssh client: &{IP:192.168.50.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa Username:docker}
	I0729 18:46:54.315981   84251 sshutil.go:53] new ssh client: &{IP:192.168.50.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa Username:docker}
	I0729 18:46:54.418629   84251 ssh_runner.go:195] Run: systemctl --version
	I0729 18:46:54.424974   84251 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:46:54.589829   84251 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:46:54.596241   84251 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:46:54.596299   84251 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:46:54.613625   84251 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:46:54.613650   84251 start.go:495] detecting cgroup driver to use...
	I0729 18:46:54.613705   84251 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:46:54.630574   84251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:46:54.644112   84251 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:46:54.644186   84251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:46:54.658500   84251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:46:54.672343   84251 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:46:54.797851   84251 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:46:54.947859   84251 docker.go:233] disabling docker service ...
	I0729 18:46:54.947930   84251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:46:54.962507   84251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:46:54.977006   84251 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:46:55.120883   84251 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:46:55.262063   84251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
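Because the requested runtime is CRI-O, both Docker front ends are taken out of the picture: cri-docker and docker are stopped, their sockets disabled, and the services masked so a later daemon-reload cannot bring them back. The same sequence, condensed from the individual calls logged above:

    # Stop and mask the Docker-based runtimes so only CRI-O serves the CRI socket.
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service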
	I0729 18:46:55.277746   84251 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:46:55.296751   84251 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 18:46:55.296842   84251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:46:55.307376   84251 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:46:55.307443   84251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:46:55.318813   84251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:46:55.331396   84251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:46:55.342410   84251 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:46:55.353158   84251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:46:55.363902   84251 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:46:55.382202   84251 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:46:55.394506   84251 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:46:55.404196   84251 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:46:55.404238   84251 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:46:55.418305   84251 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:46:55.428781   84251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:46:55.554744   84251 ssh_runner.go:195] Run: sudo systemctl restart crio
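The sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup driver and unprivileged-port sysctl the rest of the bring-up expects. The expected values in the comments below are reconstructed from the commands, not captured from the VM:

    # Inspect the drop-in the sed edits produced.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",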
	I0729 18:46:55.694512   84251 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:46:55.694569   84251 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:46:55.699802   84251 start.go:563] Will wait 60s for crictl version
	I0729 18:46:55.699864   84251 ssh_runner.go:195] Run: which crictl
	I0729 18:46:55.703876   84251 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:46:55.747949   84251 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:46:55.748032   84251 ssh_runner.go:195] Run: crio --version
	I0729 18:46:55.777800   84251 ssh_runner.go:195] Run: crio --version
	I0729 18:46:55.807768   84251 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 18:46:55.809543   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetIP
	I0729 18:46:55.812449   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:55.812807   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:55.812834   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:55.813071   84251 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 18:46:55.817841   84251 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:46:55.831890   84251 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0729 18:46:55.833297   84251 kubeadm.go:883] updating cluster {Name:newest-cni-903256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:newest-cni-903256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.148 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:46:55.833427   84251 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 18:46:55.833484   84251 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:46:55.873477   84251 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 18:46:55.873545   84251 ssh_runner.go:195] Run: which lz4
	I0729 18:46:55.877912   84251 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 18:46:55.882546   84251 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:46:55.882571   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (387176433 bytes)
	I0729 18:46:57.257976   84251 crio.go:462] duration metric: took 1.380106283s to copy over tarball
	I0729 18:46:57.258047   84251 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:46:59.324700   84251 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.066624097s)
	I0729 18:46:59.324730   84251 crio.go:469] duration metric: took 2.066726704s to extract the tarball
	I0729 18:46:59.324739   84251 ssh_runner.go:146] rm: /preloaded.tar.lz4
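Because no preloaded images were found on the fresh VM, the cached preload tarball (~387 MB) is copied over and unpacked straight into /var, which populates CRI-O's image store before kubeadm ever runs; the second `crictl images` call below then confirms everything is present. minikube streams the file over its own SSH session, but a hand-run equivalent of the copy-and-extract step would look roughly like this (paths from this run; assumes /preloaded.tar.lz4 is writable by the SSH user):

    KEY=/home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa
    TARBALL=/home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
    scp -o StrictHostKeyChecking=no -i "$KEY" "$TARBALL" docker@192.168.50.148:/preloaded.tar.lz4
    ssh -o StrictHostKeyChecking=no -i "$KEY" docker@192.168.50.148 \
      'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'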
	I0729 18:46:59.361113   84251 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:46:59.404541   84251 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:46:59.404562   84251 cache_images.go:84] Images are preloaded, skipping loading
	I0729 18:46:59.404572   84251 kubeadm.go:934] updating node { 192.168.50.148 8443 v1.31.0-beta.0 crio true true} ...
	I0729 18:46:59.404686   84251 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-903256 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-903256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
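The generated kubelet drop-in relies on a systemd convention: the empty `ExecStart=` line first clears the ExecStart inherited from the packaged kubelet.service, and the second line substitutes the minikube-specific invocation (bootstrap kubeconfig, the ServerSideApply feature gate, the node IP). It is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; what the unit will actually run can be checked on the guest with:

    sudo systemctl daemon-reload
    systemctl cat kubelet.service          # shows kubelet.service plus the 10-kubeadm.conf override
    systemctl show -p ExecStart kubelet    # confirms the overridden command line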
	I0729 18:46:59.404769   84251 ssh_runner.go:195] Run: crio config
	I0729 18:46:59.456219   84251 cni.go:84] Creating CNI manager for ""
	I0729 18:46:59.456246   84251 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:46:59.456257   84251 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0729 18:46:59.456287   84251 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.148 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-903256 NodeName:newest-cni-903256 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.148"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] Feature
Args:map[] NodeIP:192.168.50.148 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:46:59.456407   84251 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.148
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-903256"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.148
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.148"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
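This rendered config is staged as /var/tmp/minikube/kubeadm.yaml.new (2292 bytes, copied a few lines below) and, once no prior cluster state is found, handed to kubeadm. The literal invocation appears later in the log; in shape it is roughly the following (binary path and version from this run; the extra --ignore-preflight-errors flags minikube normally adds are omitted here):

    sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml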
	I0729 18:46:59.456459   84251 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 18:46:59.466666   84251 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:46:59.466723   84251 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:46:59.476108   84251 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0729 18:46:59.494182   84251 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 18:46:59.511503   84251 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0729 18:46:59.528305   84251 ssh_runner.go:195] Run: grep 192.168.50.148	control-plane.minikube.internal$ /etc/hosts
	I0729 18:46:59.532297   84251 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.148	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:46:59.546812   84251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:46:59.689068   84251 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:46:59.707507   84251 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256 for IP: 192.168.50.148
	I0729 18:46:59.707535   84251 certs.go:194] generating shared ca certs ...
	I0729 18:46:59.707556   84251 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:46:59.707727   84251 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:46:59.707783   84251 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:46:59.707796   84251 certs.go:256] generating profile certs ...
	I0729 18:46:59.707869   84251 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/client.key
	I0729 18:46:59.707886   84251 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/client.crt with IP's: []
	I0729 18:46:59.923871   84251 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/client.crt ...
	I0729 18:46:59.923898   84251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/client.crt: {Name:mk13f9cbc8097dabb8f92c284c8b5b040e870526 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:46:59.924062   84251 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/client.key ...
	I0729 18:46:59.924072   84251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/client.key: {Name:mkfa20e8be59f28104c177b931c1698c76d724d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:46:59.924167   84251 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/apiserver.key.fd2e148c
	I0729 18:46:59.924181   84251 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/apiserver.crt.fd2e148c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.148]
	I0729 18:46:59.968262   84251 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/apiserver.crt.fd2e148c ...
	I0729 18:46:59.968289   84251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/apiserver.crt.fd2e148c: {Name:mk93e8566c6330eb96edb4f77599931071304f61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:46:59.968436   84251 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/apiserver.key.fd2e148c ...
	I0729 18:46:59.968447   84251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/apiserver.key.fd2e148c: {Name:mk72789f9d7dea0d484a222371622b87d0e129d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:46:59.968517   84251 certs.go:381] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/apiserver.crt.fd2e148c -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/apiserver.crt
	I0729 18:46:59.968596   84251 certs.go:385] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/apiserver.key.fd2e148c -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/apiserver.key
	I0729 18:46:59.968653   84251 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/proxy-client.key
	I0729 18:46:59.968667   84251 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/proxy-client.crt with IP's: []
	I0729 18:47:00.187389   84251 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/proxy-client.crt ...
	I0729 18:47:00.187416   84251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/proxy-client.crt: {Name:mk35c61426a95294573e08ec36968938eb77df8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:47:00.187598   84251 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/proxy-client.key ...
	I0729 18:47:00.187616   84251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/proxy-client.key: {Name:mkd983dc814956bf3aa671390bb3b5df191d2cfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:47:00.187841   84251 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:47:00.187893   84251 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:47:00.187907   84251 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:47:00.187948   84251 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:47:00.187982   84251 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:47:00.188014   84251 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:47:00.188067   84251 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:47:00.188630   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:47:00.215261   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:47:00.240262   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:47:00.264244   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:47:00.289024   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 18:47:00.315375   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 18:47:00.341518   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:47:00.367069   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 18:47:00.392914   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:47:00.418428   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:47:00.444113   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:47:00.469085   84251 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:47:00.489671   84251 ssh_runner.go:195] Run: openssl version
	I0729 18:47:00.503876   84251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:47:00.518395   84251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:47:00.523520   84251 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:47:00.523570   84251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:47:00.533762   84251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 18:47:00.547387   84251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:47:00.559607   84251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:47:00.564806   84251 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:47:00.564867   84251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:47:00.570663   84251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:47:00.582224   84251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:47:00.593191   84251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:47:00.597963   84251 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:47:00.598014   84251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:47:00.604192   84251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
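The three install steps above follow OpenSSL's hashed-directory convention: each CA certificate is copied into /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under the name <subject-hash>.0 (b5213941 for minikubeCA, 51391683 and 3ec20f2e for the two test certs), which is how TLS clients on the guest find it without rebuilding a bundle. The hash used in the link name can be reproduced with:

    # The link name is derived from the certificate's subject hash.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"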
	I0729 18:47:00.616489   84251 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:47:00.620696   84251 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 18:47:00.620747   84251 kubeadm.go:392] StartCluster: {Name:newest-cni-903256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:newest-cni-903256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.148 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:47:00.620835   84251 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:47:00.620877   84251 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:47:00.664843   84251 cri.go:89] found id: ""
	I0729 18:47:00.664921   84251 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:47:00.676750   84251 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:47:00.692609   84251 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:47:00.704473   84251 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:47:00.704495   84251 kubeadm.go:157] found existing configuration files:
	
	I0729 18:47:00.704544   84251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:47:00.714471   84251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:47:00.714523   84251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:47:00.724327   84251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:47:00.733853   84251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:47:00.733918   84251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:47:00.745273   84251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:47:00.756473   84251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:47:00.756527   84251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:47:00.766834   84251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:47:00.776882   84251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:47:00.776946   84251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:47:00.787617   84251 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:47:00.908814   84251 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 18:47:00.908880   84251 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:47:01.042350   84251 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:47:01.042565   84251 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:47:01.042705   84251 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 18:47:01.055990   84251 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:47:01.068492   84251 out.go:204]   - Generating certificates and keys ...
	I0729 18:47:01.068633   84251 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:47:01.068786   84251 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:47:01.377003   84251 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 18:47:01.640875   84251 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 18:47:01.832596   84251 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 18:47:02.012884   84251 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 18:47:02.194005   84251 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 18:47:02.194438   84251 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-903256] and IPs [192.168.50.148 127.0.0.1 ::1]
	I0729 18:47:02.411043   84251 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 18:47:02.411652   84251 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-903256] and IPs [192.168.50.148 127.0.0.1 ::1]
	I0729 18:47:02.607561   84251 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 18:47:02.737179   84251 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 18:47:02.812883   84251 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 18:47:02.813206   84251 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:47:02.936209   84251 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:47:03.126551   84251 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 18:47:03.209779   84251 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:47:03.341381   84251 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:47:03.595883   84251 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:47:03.596658   84251 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:47:03.600803   84251 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:47:03.603419   84251 out.go:204]   - Booting up control plane ...
	I0729 18:47:03.603527   84251 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:47:03.603624   84251 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:47:03.603719   84251 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:47:03.620562   84251 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:47:03.626338   84251 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:47:03.626409   84251 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:47:03.760921   84251 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 18:47:03.761040   84251 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 18:47:04.761578   84251 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001501919s
	I0729 18:47:04.761696   84251 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 18:47:09.765313   84251 kubeadm.go:310] [api-check] The API server is healthy after 5.004480532s
	I0729 18:47:09.789440   84251 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 18:47:09.833264   84251 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 18:47:09.867921   84251 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 18:47:09.868097   84251 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-903256 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 18:47:09.881128   84251 kubeadm.go:310] [bootstrap-token] Using token: t1908v.ayn7d9z9sdth0t8s
	I0729 18:47:09.882483   84251 out.go:204]   - Configuring RBAC rules ...
	I0729 18:47:09.882630   84251 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 18:47:09.891734   84251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 18:47:09.904043   84251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 18:47:09.907805   84251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 18:47:09.912748   84251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 18:47:09.926428   84251 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 18:47:10.170825   84251 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 18:47:10.625352   84251 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 18:47:11.169462   84251 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 18:47:11.170492   84251 kubeadm.go:310] 
	I0729 18:47:11.170573   84251 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 18:47:11.170584   84251 kubeadm.go:310] 
	I0729 18:47:11.170698   84251 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 18:47:11.170714   84251 kubeadm.go:310] 
	I0729 18:47:11.170736   84251 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 18:47:11.170813   84251 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 18:47:11.170901   84251 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 18:47:11.170910   84251 kubeadm.go:310] 
	I0729 18:47:11.170986   84251 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 18:47:11.170994   84251 kubeadm.go:310] 
	I0729 18:47:11.171064   84251 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 18:47:11.171085   84251 kubeadm.go:310] 
	I0729 18:47:11.171187   84251 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 18:47:11.171294   84251 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 18:47:11.171393   84251 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 18:47:11.171427   84251 kubeadm.go:310] 
	I0729 18:47:11.171549   84251 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 18:47:11.171650   84251 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 18:47:11.171660   84251 kubeadm.go:310] 
	I0729 18:47:11.171761   84251 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token t1908v.ayn7d9z9sdth0t8s \
	I0729 18:47:11.171891   84251 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 \
	I0729 18:47:11.171921   84251 kubeadm.go:310] 	--control-plane 
	I0729 18:47:11.171930   84251 kubeadm.go:310] 
	I0729 18:47:11.172065   84251 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 18:47:11.172076   84251 kubeadm.go:310] 
	I0729 18:47:11.172180   84251 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token t1908v.ayn7d9z9sdth0t8s \
	I0729 18:47:11.172326   84251 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 
	I0729 18:47:11.173205   84251 kubeadm.go:310] W0729 18:47:00.875986     845 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 18:47:11.173499   84251 kubeadm.go:310] W0729 18:47:00.877121     845 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 18:47:11.173649   84251 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:47:11.173681   84251 cni.go:84] Creating CNI manager for ""
	I0729 18:47:11.173693   84251 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:47:11.175532   84251 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:47:11.177124   84251 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:47:11.188139   84251 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
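Context for the step above: the 496-byte conflist written to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration, which wires pod networking to the pod CIDR requested in the cluster config shown earlier (pod-network-cidr 10.42.0.0/16). A minimal way to inspect what was actually written, assuming the profile name from this log (this is a hedged sketch, not part of the test run):

	# Hedged sketch -- inspection only; profile name taken from the log above.
	# Print the bridge CNI config that minikube generated on the node.
	minikube -p newest-cni-903256 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
	# CRI-O reads CNI configs from /etc/cni/net.d, so this listing should show 1-k8s.conflist.
	minikube -p newest-cni-903256 ssh -- ls -la /etc/cni/net.d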
	I0729 18:47:11.207839   84251 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 18:47:11.207962   84251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:47:11.207980   84251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-903256 minikube.k8s.io/updated_at=2024_07_29T18_47_11_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899 minikube.k8s.io/name=newest-cni-903256 minikube.k8s.io/primary=true
	I0729 18:47:11.437645   84251 ops.go:34] apiserver oom_adj: -16
	I0729 18:47:11.437808   84251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:47:11.938709   84251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:47:12.438402   84251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:47:12.937973   84251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:47:13.438499   84251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:47:13.938485   84251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:47:14.437929   84251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:47:14.938783   84251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:47:15.438557   84251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:47:15.571843   84251 kubeadm.go:1113] duration metric: took 4.363925544s to wait for elevateKubeSystemPrivileges
	I0729 18:47:15.571892   84251 kubeadm.go:394] duration metric: took 14.951145529s to StartCluster
	I0729 18:47:15.571914   84251 settings.go:142] acquiring lock: {Name:mkd2c4591636cc1d19b23a0dab1807db2e7ea395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:47:15.572001   84251 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:47:15.574767   84251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:47:15.575029   84251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 18:47:15.575087   84251 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.148 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:47:15.575163   84251 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 18:47:15.575252   84251 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-903256"
	I0729 18:47:15.575286   84251 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-903256"
	I0729 18:47:15.575298   84251 config.go:182] Loaded profile config "newest-cni-903256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:47:15.575349   84251 host.go:66] Checking if "newest-cni-903256" exists ...
	I0729 18:47:15.575348   84251 addons.go:69] Setting default-storageclass=true in profile "newest-cni-903256"
	I0729 18:47:15.575401   84251 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-903256"
	I0729 18:47:15.575790   84251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:47:15.575809   84251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:47:15.575821   84251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:47:15.575837   84251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:47:15.576657   84251 out.go:177] * Verifying Kubernetes components...
	I0729 18:47:15.577924   84251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:47:15.592493   84251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40269
	I0729 18:47:15.592495   84251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35325
	I0729 18:47:15.593167   84251 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:47:15.593402   84251 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:47:15.593924   84251 main.go:141] libmachine: Using API Version  1
	I0729 18:47:15.593945   84251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:47:15.594060   84251 main.go:141] libmachine: Using API Version  1
	I0729 18:47:15.594080   84251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:47:15.594287   84251 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:47:15.594422   84251 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:47:15.594547   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetState
	I0729 18:47:15.594898   84251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:47:15.594921   84251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:47:15.598935   84251 addons.go:234] Setting addon default-storageclass=true in "newest-cni-903256"
	I0729 18:47:15.598979   84251 host.go:66] Checking if "newest-cni-903256" exists ...
	I0729 18:47:15.599344   84251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:47:15.599388   84251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:47:15.615129   84251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40453
	I0729 18:47:15.615641   84251 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:47:15.616206   84251 main.go:141] libmachine: Using API Version  1
	I0729 18:47:15.616231   84251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:47:15.616609   84251 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:47:15.616824   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetState
	I0729 18:47:15.616992   84251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33381
	I0729 18:47:15.617359   84251 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:47:15.617923   84251 main.go:141] libmachine: Using API Version  1
	I0729 18:47:15.617940   84251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:47:15.618449   84251 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:47:15.619034   84251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:47:15.619060   84251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:47:15.619282   84251 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:47:15.621024   84251 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:47:15.622287   84251 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:47:15.622302   84251 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 18:47:15.622316   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:47:15.625615   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:15.626080   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:47:15.626110   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:15.626417   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:47:15.626661   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:47:15.626917   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:47:15.627114   84251 sshutil.go:53] new ssh client: &{IP:192.168.50.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa Username:docker}
	I0729 18:47:15.635160   84251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37493
	I0729 18:47:15.635829   84251 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:47:15.636470   84251 main.go:141] libmachine: Using API Version  1
	I0729 18:47:15.636491   84251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:47:15.636895   84251 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:47:15.637084   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetState
	I0729 18:47:15.638538   84251 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:47:15.638758   84251 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 18:47:15.638773   84251 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 18:47:15.638789   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:47:15.641560   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:15.641940   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:47:15.641980   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:15.642295   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:47:15.642505   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:47:15.642654   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:47:15.642798   84251 sshutil.go:53] new ssh client: &{IP:192.168.50.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa Username:docker}
	I0729 18:47:15.782838   84251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 18:47:15.836149   84251 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:47:16.054209   84251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 18:47:16.057974   84251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:47:16.466223   84251 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
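Note on the sed pipeline run at 18:47:15.782838 above: it rewrites the CoreDNS Corefile so the guest resolves host.minikube.internal to the host-side gateway (192.168.50.1 here) and adds query logging; the "host record injected" line confirms the replace succeeded. A quick verification, assuming the same cluster context (hedged sketch, not executed by the test):

	# Show the hosts block that the sed pipeline inserted into the CoreDNS ConfigMap.
	kubectl --context newest-cni-903256 -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
	# Expected stanza (per the sed expression above):
	#   hosts {
	#      192.168.50.1 host.minikube.internal
	#      fallthrough
	#   }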
	I0729 18:47:16.466426   84251 main.go:141] libmachine: Making call to close driver server
	I0729 18:47:16.466466   84251 main.go:141] libmachine: (newest-cni-903256) Calling .Close
	I0729 18:47:16.466805   84251 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:47:16.466822   84251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:47:16.466822   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Closing plugin on server side
	I0729 18:47:16.466832   84251 main.go:141] libmachine: Making call to close driver server
	I0729 18:47:16.466841   84251 main.go:141] libmachine: (newest-cni-903256) Calling .Close
	I0729 18:47:16.467129   84251 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:47:16.467174   84251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:47:16.467171   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Closing plugin on server side
	I0729 18:47:16.468222   84251 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:47:16.468281   84251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:47:16.488188   84251 main.go:141] libmachine: Making call to close driver server
	I0729 18:47:16.488206   84251 main.go:141] libmachine: (newest-cni-903256) Calling .Close
	I0729 18:47:16.488571   84251 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:47:16.488589   84251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:47:16.488596   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Closing plugin on server side
	I0729 18:47:16.972086   84251 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-903256" context rescaled to 1 replicas
	I0729 18:47:17.158121   84251 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.100080696s)
	I0729 18:47:17.158168   84251 api_server.go:72] duration metric: took 1.583043597s to wait for apiserver process to appear ...
	I0729 18:47:17.158190   84251 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:47:17.158212   84251 api_server.go:253] Checking apiserver healthz at https://192.168.50.148:8443/healthz ...
	I0729 18:47:17.158189   84251 main.go:141] libmachine: Making call to close driver server
	I0729 18:47:17.158296   84251 main.go:141] libmachine: (newest-cni-903256) Calling .Close
	I0729 18:47:17.158711   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Closing plugin on server side
	I0729 18:47:17.158753   84251 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:47:17.158762   84251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:47:17.158776   84251 main.go:141] libmachine: Making call to close driver server
	I0729 18:47:17.158784   84251 main.go:141] libmachine: (newest-cni-903256) Calling .Close
	I0729 18:47:17.159194   84251 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:47:17.159212   84251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:47:17.159220   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Closing plugin on server side
	I0729 18:47:17.160884   84251 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0729 18:47:17.162070   84251 addons.go:510] duration metric: took 1.586906762s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0729 18:47:17.175977   84251 api_server.go:279] https://192.168.50.148:8443/healthz returned 200:
	ok
	I0729 18:47:17.179364   84251 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 18:47:17.179393   84251 api_server.go:131] duration metric: took 21.195203ms to wait for apiserver health ...
	I0729 18:47:17.179401   84251 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:47:17.202020   84251 system_pods.go:59] 8 kube-system pods found
	I0729 18:47:17.202103   84251 system_pods.go:61] "coredns-5cfdc65f69-p6wlm" [3052e7d5-bdfd-4118-b8b6-72945d493f25] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 18:47:17.202115   84251 system_pods.go:61] "coredns-5cfdc65f69-qtk95" [62593f19-4cb2-4f6e-8ddf-d375569d07e3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 18:47:17.202132   84251 system_pods.go:61] "etcd-newest-cni-903256" [4cd7ea62-6212-47df-9abb-5e48bcb73b28] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 18:47:17.202139   84251 system_pods.go:61] "kube-apiserver-newest-cni-903256" [ec79f34c-4b0a-46ee-9d78-7545e20b33d5] Running
	I0729 18:47:17.202152   84251 system_pods.go:61] "kube-controller-manager-newest-cni-903256" [548b9248-c25d-4ac7-bd55-0f20614a9640] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 18:47:17.202166   84251 system_pods.go:61] "kube-proxy-x7f5t" [658c7d91-ce7a-40d4-93d1-731444281915] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 18:47:17.202174   84251 system_pods.go:61] "kube-scheduler-newest-cni-903256" [4e642140-ca06-42e5-963c-87d8a5be6f53] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 18:47:17.202182   84251 system_pods.go:61] "storage-provisioner" [b5577ca0-0b55-4c46-ab7c-3f5da0a62c72] Pending
	I0729 18:47:17.202190   84251 system_pods.go:74] duration metric: took 22.783633ms to wait for pod list to return data ...
	I0729 18:47:17.202199   84251 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:47:17.214888   84251 default_sa.go:45] found service account: "default"
	I0729 18:47:17.214917   84251 default_sa.go:55] duration metric: took 12.71274ms for default service account to be created ...
	I0729 18:47:17.214929   84251 kubeadm.go:582] duration metric: took 1.639808682s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 18:47:17.214944   84251 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:47:17.222488   84251 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:47:17.222511   84251 node_conditions.go:123] node cpu capacity is 2
	I0729 18:47:17.222523   84251 node_conditions.go:105] duration metric: took 7.574074ms to run NodePressure ...
	I0729 18:47:17.222533   84251 start.go:241] waiting for startup goroutines ...
	I0729 18:47:17.222539   84251 start.go:246] waiting for cluster config update ...
	I0729 18:47:17.222549   84251 start.go:255] writing updated cluster config ...
	I0729 18:47:17.222860   84251 ssh_runner.go:195] Run: rm -f paused
	I0729 18:47:17.276577   84251 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 18:47:17.278711   84251 out.go:177] * Done! kubectl is now configured to use "newest-cni-903256" cluster and "default" namespace by default
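Condensed, the successful bring-up recorded above reduces to the following sequence on the guest (a sketch assembled from the commands in this log; long flag lists and the pinned kubectl path under /var/lib/minikube/binaries/ are abbreviated, and this is not a substitute for `minikube start`):

	# Hedged recap of the steps minikube ran, per the log above.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem   # list abbreviated
	sudo mkdir -p /etc/cni/net.d                                  # bridge CNI conflist is written here
	sudo kubectl create clusterrolebinding minikube-rbac \
	  --clusterrole=cluster-admin --serviceaccount=kube-system:default \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply -f /etc/kubernetes/addons/storageclass.yaml          # default-storageclass
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml   # storage-provisioner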
	
	
	==> CRI-O <==
	Jul 29 18:47:28 embed-certs-409322 crio[736]: time="2024-07-29 18:47:28.790338427Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e587e493-6afe-4694-bec6-ce68a74f621a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:47:28 embed-certs-409322 crio[736]: time="2024-07-29 18:47:28.790659834Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b7c5ae6c21d8f1d4f5a76a7b2b00dee58ba86c016e42d5cd9ce48f176d17841a,PodSandboxId:82026e4cbebb5982849c683056b2d4c9434dad95444553ce97c5cbae66293adc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277919246250692,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wztpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1a01e7-9cec-4ba8-a340-8f9ccdd728d7,},Annotations:map[string]string{io.kubernetes.container.hash: 9db37acf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fed3fc2d0d695c51ec9efec220330db6860a605adc586d580af4c01807311ff,PodSandboxId:04fd5d7fe81c81684f1b7141c3faef454f9405126ae412a4854a6188fcfb611b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277919205954883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wpnfg,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 687cbc8f-370a-4b72-bc1c-6ae36efe890e,},Annotations:map[string]string{io.kubernetes.container.hash: d69d4181,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e402548bbd184a3a7181b5b5ca203f0a39d4bbd2f8d1914859e6a313e39f3e2b,PodSandboxId:ce759b42015c7e603f9ebd6843740a9d52aae7491d7e28054e8fb0fb266bbd77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1722277919153466747,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0b1e31d-9b5c-4e82-aea7-56184832c053,},Annotations:map[string]string{io.kubernetes.container.hash: 5369b1b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2ce2ac925a3585ef4fcd922a7c109fe2a4d0d04b242caab76127ae6deb93e4,PodSandboxId:f72958198ab02cacdfc9e0d6c64c0c78c7cfc66f1c62d82ad78cf21d5cfa247e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722277917569074128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxf5z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ed1812-b3bf-429d-b8f1-bdccb3415fb5,},Annotations:map[string]string{io.kubernetes.container.hash: b2e8647d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37921ddb40291c04357e83617a73e854a3aafeff689969b204c711a6d7ae42fc,PodSandboxId:7a83e374fcfcad04cf8591c3f238d286e1dce8a7dc19cd687620753f3cbba4ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722277898349454609,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092843ca16e3768154d9eaefe813d4c4,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2635bb0eb62d0c9d915e1779b6f1295ca78236982bffa8667c094b32b7ef83d1,PodSandboxId:a52ce0bf9b3060f373ba4f8f1f53f58dfb0392645383900cbcb929df59ac830c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722277898341865986,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f630b5e9caeb1a971fb8d4ec7f20523,},Annotations:map[string]string{io.kubernetes.container.hash: 4d95585c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b4971b286aa874f2e68c05351d5ab5e550733aa9de2cf7f91f6ee982e33501,PodSandboxId:bf85cb0dd5956d55683ad1694e3566c2eb05aaa501aec3eed08d8c988b9af21b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722277898334151449,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e31ec24c194f37219d0b834f527350d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555efbbd128acb28965dc1362a9e9e494a071469264ca202038e1c023a8e83d5,PodSandboxId:9711aadb24fa2551861a07b8e7ba700abac903a5e499e2290da5f8281a2d5db6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722277898237354843,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85711970cfb58ff1e43c65ebe2b0ea9b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a911c7f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e587e493-6afe-4694-bec6-ce68a74f621a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:47:28 embed-certs-409322 crio[736]: time="2024-07-29 18:47:28.832218075Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a128bdf-c22f-4098-9393-b93aa88f8659 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:47:28 embed-certs-409322 crio[736]: time="2024-07-29 18:47:28.832376310Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a128bdf-c22f-4098-9393-b93aa88f8659 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:47:28 embed-certs-409322 crio[736]: time="2024-07-29 18:47:28.834600289Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c14a596-bb49-480c-aa65-6a3c681a66a5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:47:28 embed-certs-409322 crio[736]: time="2024-07-29 18:47:28.835084878Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278848835059641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c14a596-bb49-480c-aa65-6a3c681a66a5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:47:28 embed-certs-409322 crio[736]: time="2024-07-29 18:47:28.836092605Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0f63ed9-531c-430f-8ca9-0c5bb752aa77 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:47:28 embed-certs-409322 crio[736]: time="2024-07-29 18:47:28.836157429Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0f63ed9-531c-430f-8ca9-0c5bb752aa77 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:47:28 embed-certs-409322 crio[736]: time="2024-07-29 18:47:28.838165210Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b7c5ae6c21d8f1d4f5a76a7b2b00dee58ba86c016e42d5cd9ce48f176d17841a,PodSandboxId:82026e4cbebb5982849c683056b2d4c9434dad95444553ce97c5cbae66293adc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277919246250692,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wztpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1a01e7-9cec-4ba8-a340-8f9ccdd728d7,},Annotations:map[string]string{io.kubernetes.container.hash: 9db37acf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fed3fc2d0d695c51ec9efec220330db6860a605adc586d580af4c01807311ff,PodSandboxId:04fd5d7fe81c81684f1b7141c3faef454f9405126ae412a4854a6188fcfb611b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277919205954883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wpnfg,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 687cbc8f-370a-4b72-bc1c-6ae36efe890e,},Annotations:map[string]string{io.kubernetes.container.hash: d69d4181,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e402548bbd184a3a7181b5b5ca203f0a39d4bbd2f8d1914859e6a313e39f3e2b,PodSandboxId:ce759b42015c7e603f9ebd6843740a9d52aae7491d7e28054e8fb0fb266bbd77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1722277919153466747,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0b1e31d-9b5c-4e82-aea7-56184832c053,},Annotations:map[string]string{io.kubernetes.container.hash: 5369b1b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2ce2ac925a3585ef4fcd922a7c109fe2a4d0d04b242caab76127ae6deb93e4,PodSandboxId:f72958198ab02cacdfc9e0d6c64c0c78c7cfc66f1c62d82ad78cf21d5cfa247e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722277917569074128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxf5z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ed1812-b3bf-429d-b8f1-bdccb3415fb5,},Annotations:map[string]string{io.kubernetes.container.hash: b2e8647d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37921ddb40291c04357e83617a73e854a3aafeff689969b204c711a6d7ae42fc,PodSandboxId:7a83e374fcfcad04cf8591c3f238d286e1dce8a7dc19cd687620753f3cbba4ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722277898349454609,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092843ca16e3768154d9eaefe813d4c4,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2635bb0eb62d0c9d915e1779b6f1295ca78236982bffa8667c094b32b7ef83d1,PodSandboxId:a52ce0bf9b3060f373ba4f8f1f53f58dfb0392645383900cbcb929df59ac830c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722277898341865986,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f630b5e9caeb1a971fb8d4ec7f20523,},Annotations:map[string]string{io.kubernetes.container.hash: 4d95585c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b4971b286aa874f2e68c05351d5ab5e550733aa9de2cf7f91f6ee982e33501,PodSandboxId:bf85cb0dd5956d55683ad1694e3566c2eb05aaa501aec3eed08d8c988b9af21b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722277898334151449,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e31ec24c194f37219d0b834f527350d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555efbbd128acb28965dc1362a9e9e494a071469264ca202038e1c023a8e83d5,PodSandboxId:9711aadb24fa2551861a07b8e7ba700abac903a5e499e2290da5f8281a2d5db6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722277898237354843,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85711970cfb58ff1e43c65ebe2b0ea9b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a911c7f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0f63ed9-531c-430f-8ca9-0c5bb752aa77 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:47:28 embed-certs-409322 crio[736]: time="2024-07-29 18:47:28.883612223Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=921d941a-cf6d-4c5f-80d1-b7d464079707 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:47:28 embed-certs-409322 crio[736]: time="2024-07-29 18:47:28.883741011Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=921d941a-cf6d-4c5f-80d1-b7d464079707 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:47:28 embed-certs-409322 crio[736]: time="2024-07-29 18:47:28.885391532Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4b200202-416f-4d6f-a1a3-0a256671bb9d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:47:28 embed-certs-409322 crio[736]: time="2024-07-29 18:47:28.886221135Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278848886193225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b200202-416f-4d6f-a1a3-0a256671bb9d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:47:28 embed-certs-409322 crio[736]: time="2024-07-29 18:47:28.887341138Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ef6e3f82-882b-4eea-aa69-0efb10b6b712 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:47:28 embed-certs-409322 crio[736]: time="2024-07-29 18:47:28.887417684Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ef6e3f82-882b-4eea-aa69-0efb10b6b712 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:47:28 embed-certs-409322 crio[736]: time="2024-07-29 18:47:28.887594258Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b7c5ae6c21d8f1d4f5a76a7b2b00dee58ba86c016e42d5cd9ce48f176d17841a,PodSandboxId:82026e4cbebb5982849c683056b2d4c9434dad95444553ce97c5cbae66293adc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277919246250692,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wztpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1a01e7-9cec-4ba8-a340-8f9ccdd728d7,},Annotations:map[string]string{io.kubernetes.container.hash: 9db37acf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fed3fc2d0d695c51ec9efec220330db6860a605adc586d580af4c01807311ff,PodSandboxId:04fd5d7fe81c81684f1b7141c3faef454f9405126ae412a4854a6188fcfb611b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277919205954883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wpnfg,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 687cbc8f-370a-4b72-bc1c-6ae36efe890e,},Annotations:map[string]string{io.kubernetes.container.hash: d69d4181,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e402548bbd184a3a7181b5b5ca203f0a39d4bbd2f8d1914859e6a313e39f3e2b,PodSandboxId:ce759b42015c7e603f9ebd6843740a9d52aae7491d7e28054e8fb0fb266bbd77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1722277919153466747,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0b1e31d-9b5c-4e82-aea7-56184832c053,},Annotations:map[string]string{io.kubernetes.container.hash: 5369b1b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2ce2ac925a3585ef4fcd922a7c109fe2a4d0d04b242caab76127ae6deb93e4,PodSandboxId:f72958198ab02cacdfc9e0d6c64c0c78c7cfc66f1c62d82ad78cf21d5cfa247e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722277917569074128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxf5z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ed1812-b3bf-429d-b8f1-bdccb3415fb5,},Annotations:map[string]string{io.kubernetes.container.hash: b2e8647d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37921ddb40291c04357e83617a73e854a3aafeff689969b204c711a6d7ae42fc,PodSandboxId:7a83e374fcfcad04cf8591c3f238d286e1dce8a7dc19cd687620753f3cbba4ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722277898349454609,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092843ca16e3768154d9eaefe813d4c4,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2635bb0eb62d0c9d915e1779b6f1295ca78236982bffa8667c094b32b7ef83d1,PodSandboxId:a52ce0bf9b3060f373ba4f8f1f53f58dfb0392645383900cbcb929df59ac830c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722277898341865986,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f630b5e9caeb1a971fb8d4ec7f20523,},Annotations:map[string]string{io.kubernetes.container.hash: 4d95585c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b4971b286aa874f2e68c05351d5ab5e550733aa9de2cf7f91f6ee982e33501,PodSandboxId:bf85cb0dd5956d55683ad1694e3566c2eb05aaa501aec3eed08d8c988b9af21b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722277898334151449,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e31ec24c194f37219d0b834f527350d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555efbbd128acb28965dc1362a9e9e494a071469264ca202038e1c023a8e83d5,PodSandboxId:9711aadb24fa2551861a07b8e7ba700abac903a5e499e2290da5f8281a2d5db6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722277898237354843,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85711970cfb58ff1e43c65ebe2b0ea9b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a911c7f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ef6e3f82-882b-4eea-aa69-0efb10b6b712 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:47:28 embed-certs-409322 crio[736]: time="2024-07-29 18:47:28.903426920Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=ce7e7c8b-d0df-459c-8476-64d25efb0604 name=/runtime.v1.RuntimeService/Status
	Jul 29 18:47:28 embed-certs-409322 crio[736]: time="2024-07-29 18:47:28.903534853Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=ce7e7c8b-d0df-459c-8476-64d25efb0604 name=/runtime.v1.RuntimeService/Status
	Jul 29 18:47:28 embed-certs-409322 crio[736]: time="2024-07-29 18:47:28.930485041Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50cb9f5c-755b-48e8-a245-644909f0adfe name=/runtime.v1.RuntimeService/Version
	Jul 29 18:47:28 embed-certs-409322 crio[736]: time="2024-07-29 18:47:28.930562840Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50cb9f5c-755b-48e8-a245-644909f0adfe name=/runtime.v1.RuntimeService/Version
	Jul 29 18:47:28 embed-certs-409322 crio[736]: time="2024-07-29 18:47:28.931799939Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ed3e5c7a-b97e-4fe9-a275-05a8b155517d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:47:28 embed-certs-409322 crio[736]: time="2024-07-29 18:47:28.932345654Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278848932320662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed3e5c7a-b97e-4fe9-a275-05a8b155517d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:47:28 embed-certs-409322 crio[736]: time="2024-07-29 18:47:28.932849836Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=073ea311-5f71-4d41-a1b8-77e7dd0765ae name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:47:28 embed-certs-409322 crio[736]: time="2024-07-29 18:47:28.932902306Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=073ea311-5f71-4d41-a1b8-77e7dd0765ae name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:47:28 embed-certs-409322 crio[736]: time="2024-07-29 18:47:28.933125063Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b7c5ae6c21d8f1d4f5a76a7b2b00dee58ba86c016e42d5cd9ce48f176d17841a,PodSandboxId:82026e4cbebb5982849c683056b2d4c9434dad95444553ce97c5cbae66293adc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277919246250692,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wztpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1a01e7-9cec-4ba8-a340-8f9ccdd728d7,},Annotations:map[string]string{io.kubernetes.container.hash: 9db37acf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fed3fc2d0d695c51ec9efec220330db6860a605adc586d580af4c01807311ff,PodSandboxId:04fd5d7fe81c81684f1b7141c3faef454f9405126ae412a4854a6188fcfb611b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277919205954883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wpnfg,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 687cbc8f-370a-4b72-bc1c-6ae36efe890e,},Annotations:map[string]string{io.kubernetes.container.hash: d69d4181,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e402548bbd184a3a7181b5b5ca203f0a39d4bbd2f8d1914859e6a313e39f3e2b,PodSandboxId:ce759b42015c7e603f9ebd6843740a9d52aae7491d7e28054e8fb0fb266bbd77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1722277919153466747,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0b1e31d-9b5c-4e82-aea7-56184832c053,},Annotations:map[string]string{io.kubernetes.container.hash: 5369b1b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2ce2ac925a3585ef4fcd922a7c109fe2a4d0d04b242caab76127ae6deb93e4,PodSandboxId:f72958198ab02cacdfc9e0d6c64c0c78c7cfc66f1c62d82ad78cf21d5cfa247e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722277917569074128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxf5z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ed1812-b3bf-429d-b8f1-bdccb3415fb5,},Annotations:map[string]string{io.kubernetes.container.hash: b2e8647d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37921ddb40291c04357e83617a73e854a3aafeff689969b204c711a6d7ae42fc,PodSandboxId:7a83e374fcfcad04cf8591c3f238d286e1dce8a7dc19cd687620753f3cbba4ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722277898349454609,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092843ca16e3768154d9eaefe813d4c4,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2635bb0eb62d0c9d915e1779b6f1295ca78236982bffa8667c094b32b7ef83d1,PodSandboxId:a52ce0bf9b3060f373ba4f8f1f53f58dfb0392645383900cbcb929df59ac830c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722277898341865986,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f630b5e9caeb1a971fb8d4ec7f20523,},Annotations:map[string]string{io.kubernetes.container.hash: 4d95585c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b4971b286aa874f2e68c05351d5ab5e550733aa9de2cf7f91f6ee982e33501,PodSandboxId:bf85cb0dd5956d55683ad1694e3566c2eb05aaa501aec3eed08d8c988b9af21b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722277898334151449,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e31ec24c194f37219d0b834f527350d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555efbbd128acb28965dc1362a9e9e494a071469264ca202038e1c023a8e83d5,PodSandboxId:9711aadb24fa2551861a07b8e7ba700abac903a5e499e2290da5f8281a2d5db6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722277898237354843,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-409322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85711970cfb58ff1e43c65ebe2b0ea9b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a911c7f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=073ea311-5f71-4d41-a1b8-77e7dd0765ae name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b7c5ae6c21d8f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   82026e4cbebb5       coredns-7db6d8ff4d-wztpj
	2fed3fc2d0d69       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   04fd5d7fe81c8       coredns-7db6d8ff4d-wpnfg
	e402548bbd184       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   ce759b42015c7       storage-provisioner
	bc2ce2ac925a3       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   15 minutes ago      Running             kube-proxy                0                   f72958198ab02       kube-proxy-kxf5z
	37921ddb40291       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   15 minutes ago      Running             kube-scheduler            2                   7a83e374fcfca       kube-scheduler-embed-certs-409322
	2635bb0eb62d0       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   15 minutes ago      Running             etcd                      2                   a52ce0bf9b306       etcd-embed-certs-409322
	88b4971b286aa       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   15 minutes ago      Running             kube-controller-manager   2                   bf85cb0dd5956       kube-controller-manager-embed-certs-409322
	555efbbd128ac       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   15 minutes ago      Running             kube-apiserver            2                   9711aadb24fa2       kube-apiserver-embed-certs-409322
	
	
	==> coredns [2fed3fc2d0d695c51ec9efec220330db6860a605adc586d580af4c01807311ff] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [b7c5ae6c21d8f1d4f5a76a7b2b00dee58ba86c016e42d5cd9ce48f176d17841a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-409322
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-409322
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=embed-certs-409322
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T18_31_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:31:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-409322
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:47:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:47:23 +0000   Mon, 29 Jul 2024 18:31:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:47:23 +0000   Mon, 29 Jul 2024 18:31:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:47:23 +0000   Mon, 29 Jul 2024 18:31:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:47:23 +0000   Mon, 29 Jul 2024 18:31:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.58
	  Hostname:    embed-certs-409322
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 77f2a8d4ed7d4c0f9d36bd1e29b0175a
	  System UUID:                77f2a8d4-ed7d-4c0f-9d36-bd1e29b0175a
	  Boot ID:                    ab577673-01e5-4ce5-b335-2d04fd2b473f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-wpnfg                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-wztpj                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-409322                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-409322             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-409322    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-kxf5z                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-409322             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-569cc877fc-6q4nl               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node embed-certs-409322 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node embed-certs-409322 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node embed-certs-409322 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m   node-controller  Node embed-certs-409322 event: Registered Node embed-certs-409322 in Controller
	
	
	==> dmesg <==
	[  +0.049912] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039411] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.774653] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.424535] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.605559] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.855547] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.058726] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075565] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.178114] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.149484] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +0.283176] systemd-fstab-generator[719]: Ignoring "noauto" option for root device
	[  +4.339128] systemd-fstab-generator[816]: Ignoring "noauto" option for root device
	[  +0.060783] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.160800] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +5.624007] kauditd_printk_skb: 97 callbacks suppressed
	[Jul29 18:27] kauditd_printk_skb: 84 callbacks suppressed
	[Jul29 18:31] kauditd_printk_skb: 10 callbacks suppressed
	[  +1.556850] systemd-fstab-generator[3623]: Ignoring "noauto" option for root device
	[  +6.046141] systemd-fstab-generator[3947]: Ignoring "noauto" option for root device
	[  +0.086239] kauditd_printk_skb: 53 callbacks suppressed
	[ +14.081215] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.316583] systemd-fstab-generator[4242]: Ignoring "noauto" option for root device
	[Jul29 18:32] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [2635bb0eb62d0c9d915e1779b6f1295ca78236982bffa8667c094b32b7ef83d1] <==
	{"level":"info","ts":"2024-07-29T18:31:38.749512Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"91c640bc00cd2aea","local-member-id":"ded7f9817c909548","added-peer-id":"ded7f9817c909548","added-peer-peer-urls":["https://192.168.39.58:2380"]}
	{"level":"info","ts":"2024-07-29T18:31:39.392501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T18:31:39.392642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T18:31:39.392681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 received MsgPreVoteResp from ded7f9817c909548 at term 1"}
	{"level":"info","ts":"2024-07-29T18:31:39.392698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T18:31:39.392754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 received MsgVoteResp from ded7f9817c909548 at term 2"}
	{"level":"info","ts":"2024-07-29T18:31:39.392782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T18:31:39.392791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ded7f9817c909548 elected leader ded7f9817c909548 at term 2"}
	{"level":"info","ts":"2024-07-29T18:31:39.397374Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:31:39.400475Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"ded7f9817c909548","local-member-attributes":"{Name:embed-certs-409322 ClientURLs:[https://192.168.39.58:2379]}","request-path":"/0/members/ded7f9817c909548/attributes","cluster-id":"91c640bc00cd2aea","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T18:31:39.400789Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T18:31:39.405456Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T18:31:39.405704Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T18:31:39.405735Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T18:31:39.40915Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T18:31:39.420797Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"91c640bc00cd2aea","local-member-id":"ded7f9817c909548","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:31:39.420902Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:31:39.420924Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:31:39.440313Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.58:2379"}
	{"level":"info","ts":"2024-07-29T18:41:39.458053Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":681}
	{"level":"info","ts":"2024-07-29T18:41:39.468059Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":681,"took":"9.528261ms","hash":452270519,"current-db-size-bytes":2134016,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2134016,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-07-29T18:41:39.468116Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":452270519,"revision":681,"compact-revision":-1}
	{"level":"info","ts":"2024-07-29T18:46:39.467499Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":924}
	{"level":"info","ts":"2024-07-29T18:46:39.472425Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":924,"took":"4.427567ms","hash":3218967066,"current-db-size-bytes":2134016,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1519616,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-07-29T18:46:39.472502Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3218967066,"revision":924,"compact-revision":681}
	
	
	==> kernel <==
	 18:47:29 up 21 min,  0 users,  load average: 0.40, 0.24, 0.15
	Linux embed-certs-409322 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [555efbbd128acb28965dc1362a9e9e494a071469264ca202038e1c023a8e83d5] <==
	I0729 18:41:42.090578       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 18:42:42.090348       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 18:42:42.090431       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 18:42:42.090440       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 18:42:42.091663       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 18:42:42.091732       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 18:42:42.091740       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 18:44:42.091049       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 18:44:42.091170       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 18:44:42.091183       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 18:44:42.092334       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 18:44:42.092474       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 18:44:42.092512       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 18:46:41.093904       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 18:46:41.094370       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0729 18:46:42.094618       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 18:46:42.094704       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 18:46:42.094712       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 18:46:42.094794       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 18:46:42.094827       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 18:46:42.095889       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [88b4971b286aa874f2e68c05351d5ab5e550733aa9de2cf7f91f6ee982e33501] <==
	I0729 18:41:57.187210       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:42:26.691898       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:42:27.195827       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:42:56.697356       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:42:57.205732       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 18:43:16.617153       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="403.403µs"
	E0729 18:43:26.702387       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:43:27.213519       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 18:43:27.615537       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="86.363µs"
	E0729 18:43:56.708264       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:43:57.222102       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:44:26.713926       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:44:27.230274       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:44:56.720221       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:44:57.237849       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:45:26.725510       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:45:27.245936       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:45:56.730866       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:45:57.253176       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:46:26.737535       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:46:27.262284       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:46:56.745349       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:46:57.272304       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:47:26.750384       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 18:47:27.281471       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [bc2ce2ac925a3585ef4fcd922a7c109fe2a4d0d04b242caab76127ae6deb93e4] <==
	I0729 18:31:57.863830       1 server_linux.go:69] "Using iptables proxy"
	I0729 18:31:57.873512       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.58"]
	I0729 18:31:57.936856       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 18:31:57.936919       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 18:31:57.937015       1 server_linux.go:165] "Using iptables Proxier"
	I0729 18:31:57.941544       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 18:31:57.941742       1 server.go:872] "Version info" version="v1.30.3"
	I0729 18:31:57.941772       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:31:57.947209       1 config.go:192] "Starting service config controller"
	I0729 18:31:57.947285       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 18:31:57.947413       1 config.go:101] "Starting endpoint slice config controller"
	I0729 18:31:57.947530       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 18:31:57.948765       1 config.go:319] "Starting node config controller"
	I0729 18:31:57.950921       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 18:31:58.047953       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 18:31:58.048063       1 shared_informer.go:320] Caches are synced for service config
	I0729 18:31:58.051924       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [37921ddb40291c04357e83617a73e854a3aafeff689969b204c711a6d7ae42fc] <==
	W0729 18:31:41.968751       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 18:31:41.968802       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 18:31:41.988224       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 18:31:41.988309       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 18:31:42.032666       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 18:31:42.032845       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 18:31:42.171536       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 18:31:42.171584       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 18:31:42.257675       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 18:31:42.257922       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 18:31:42.301498       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 18:31:42.301593       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 18:31:42.366773       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 18:31:42.367234       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 18:31:42.424857       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 18:31:42.425126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 18:31:42.425617       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 18:31:42.425655       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 18:31:42.437416       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 18:31:42.437554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 18:31:42.453696       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 18:31:42.453817       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 18:31:42.567547       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 18:31:42.567685       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 18:31:44.821584       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 18:44:43 embed-certs-409322 kubelet[3954]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:44:43 embed-certs-409322 kubelet[3954]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:44:52 embed-certs-409322 kubelet[3954]: E0729 18:44:52.600161    3954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q4nl" podUID="57dc61cc-7490-49e5-9d03-c81aa5d25aea"
	Jul 29 18:45:06 embed-certs-409322 kubelet[3954]: E0729 18:45:06.599182    3954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q4nl" podUID="57dc61cc-7490-49e5-9d03-c81aa5d25aea"
	Jul 29 18:45:19 embed-certs-409322 kubelet[3954]: E0729 18:45:19.601256    3954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q4nl" podUID="57dc61cc-7490-49e5-9d03-c81aa5d25aea"
	Jul 29 18:45:31 embed-certs-409322 kubelet[3954]: E0729 18:45:31.599589    3954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q4nl" podUID="57dc61cc-7490-49e5-9d03-c81aa5d25aea"
	Jul 29 18:45:43 embed-certs-409322 kubelet[3954]: E0729 18:45:43.645536    3954 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:45:43 embed-certs-409322 kubelet[3954]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:45:43 embed-certs-409322 kubelet[3954]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:45:43 embed-certs-409322 kubelet[3954]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:45:43 embed-certs-409322 kubelet[3954]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:45:45 embed-certs-409322 kubelet[3954]: E0729 18:45:45.599080    3954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q4nl" podUID="57dc61cc-7490-49e5-9d03-c81aa5d25aea"
	Jul 29 18:45:56 embed-certs-409322 kubelet[3954]: E0729 18:45:56.598876    3954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q4nl" podUID="57dc61cc-7490-49e5-9d03-c81aa5d25aea"
	Jul 29 18:46:10 embed-certs-409322 kubelet[3954]: E0729 18:46:10.599931    3954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q4nl" podUID="57dc61cc-7490-49e5-9d03-c81aa5d25aea"
	Jul 29 18:46:22 embed-certs-409322 kubelet[3954]: E0729 18:46:22.600127    3954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q4nl" podUID="57dc61cc-7490-49e5-9d03-c81aa5d25aea"
	Jul 29 18:46:33 embed-certs-409322 kubelet[3954]: E0729 18:46:33.601849    3954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q4nl" podUID="57dc61cc-7490-49e5-9d03-c81aa5d25aea"
	Jul 29 18:46:43 embed-certs-409322 kubelet[3954]: E0729 18:46:43.646263    3954 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:46:43 embed-certs-409322 kubelet[3954]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:46:43 embed-certs-409322 kubelet[3954]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:46:43 embed-certs-409322 kubelet[3954]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:46:43 embed-certs-409322 kubelet[3954]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:46:47 embed-certs-409322 kubelet[3954]: E0729 18:46:47.599884    3954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q4nl" podUID="57dc61cc-7490-49e5-9d03-c81aa5d25aea"
	Jul 29 18:46:58 embed-certs-409322 kubelet[3954]: E0729 18:46:58.600611    3954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q4nl" podUID="57dc61cc-7490-49e5-9d03-c81aa5d25aea"
	Jul 29 18:47:13 embed-certs-409322 kubelet[3954]: E0729 18:47:13.599560    3954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q4nl" podUID="57dc61cc-7490-49e5-9d03-c81aa5d25aea"
	Jul 29 18:47:24 embed-certs-409322 kubelet[3954]: E0729 18:47:24.600681    3954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q4nl" podUID="57dc61cc-7490-49e5-9d03-c81aa5d25aea"
	
	
	==> storage-provisioner [e402548bbd184a3a7181b5b5ca203f0a39d4bbd2f8d1914859e6a313e39f3e2b] <==
	I0729 18:31:59.393488       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 18:31:59.453244       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 18:31:59.453436       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 18:31:59.523367       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 18:31:59.523533       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-409322_c83607fa-9136-4286-b325-60043990567d!
	I0729 18:31:59.542521       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0cd27038-3604-4007-bef6-da9bfed0b48f", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-409322_c83607fa-9136-4286-b325-60043990567d became leader
	I0729 18:31:59.623723       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-409322_c83607fa-9136-4286-b325-60043990567d!
	

                                                
                                                
-- /stdout --
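Editor's note: the storage-provisioner lines in the log above show the provisioner starting its controller only after winning the leader-election lock named k8s.io-minikube-hostpath in kube-system. A minimal client-go sketch of that pattern follows; it uses a coordination/v1 Lease lock for simplicity (the provisioner in this log uses an older Endpoints-based lock), and the identity string and function names are assumptions.

    package provisionerexample

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    // runWhenLeader blocks until this instance acquires the lock, then runs startController,
    // mirroring the "attempting to acquire leader lease ... successfully acquired lease" lines above.
    func runWhenLeader(ctx context.Context, client kubernetes.Interface, id string, startController func(context.Context)) {
        lock := &resourcelock.LeaseLock{
            // Lock name and namespace taken from the log; using a Lease object is an assumption.
            LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
            Client:     client.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: id},
        }
        leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second,
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: startController,           // start provisioning once the lock is held
                OnStoppedLeading: func() { /* stop work */ }, // lock lost: stop provisioning volumes
            },
        })
    }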
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-409322 -n embed-certs-409322
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-409322 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-6q4nl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-409322 describe pod metrics-server-569cc877fc-6q4nl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-409322 describe pod metrics-server-569cc877fc-6q4nl: exit status 1 (68.012227ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-6q4nl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-409322 describe pod metrics-server-569cc877fc-6q4nl: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (375.65s)
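Editor's note: the kubelet log above explains the non-running pod in this post-mortem: the test deliberately repoints the MetricsServer registry at fake.domain, so metrics-server sits in ImagePullBackOff for the whole run. A hedged client-go sketch for confirming that state from the pod's container statuses follows; the kubeconfig path and the k8s-app=metrics-server label selector are assumptions, not taken from this report.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path; the harness drives kubectl with --context instead.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Assumed label selector for the metrics-server addon pods.
        pods, err := client.CoreV1().Pods("kube-system").List(context.Background(),
            metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            for _, cs := range p.Status.ContainerStatuses {
                if w := cs.State.Waiting; w != nil {
                    // For this run we would expect Reason "ImagePullBackOff" and the fake.domain image.
                    fmt.Printf("%s/%s: %s (%s) image=%s\n", p.Namespace, p.Name, w.Reason, w.Message, cs.Image)
                }
            }
        }
    }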

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (315.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-888056 -n no-preload-888056
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-29 18:47:20.907799028 +0000 UTC m=+6697.575167338
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-888056 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-888056 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.745µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-888056 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
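Editor's note: the failure above comes from the harness waiting up to 9m0s for a pod labelled k8s-app=kubernetes-dashboard and hitting the context deadline. As an illustration of that kind of wait (not the harness's own implementation), a client-go polling loop might look like the sketch below; the helper name and the 5-second poll interval are assumptions.

    package waitexample

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForLabeledPod polls until a pod matching selector is Running or the timeout expires,
    // roughly mirroring the "waiting 9m0s for pods matching ..." step in the harness.
    func waitForLabeledPod(client kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 5*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil {
                    return false, nil // keep retrying on transient API errors
                }
                for _, p := range pods.Items {
                    if p.Status.Phase == corev1.PodRunning {
                        fmt.Printf("pod %s is running\n", p.Name)
                        return true, nil
                    }
                }
                return false, nil
            })
    }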
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-888056 -n no-preload-888056
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-888056 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-888056 logs -n 25: (1.242296791s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-729010 sudo find                             | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo crio                             | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-729010                                       | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	| delete  | -p                                                     | disable-driver-mounts-603863 | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | disable-driver-mounts-603863                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:19 UTC |
	|         | default-k8s-diff-port-502055                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-888056             | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC | 29 Jul 24 18:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-888056                                   | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-409322            | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC | 29 Jul 24 18:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-409322                                  | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-502055  | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC |                     |
	|         | default-k8s-diff-port-502055                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-386663        | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-888056                  | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-888056 --memory=2200                     | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:33 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-409322                 | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-409322                                  | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-502055       | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:31 UTC |
	|         | default-k8s-diff-port-502055                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-386663                              | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:22 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-386663             | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-386663                              | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-386663                              | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC | 29 Jul 24 18:46 UTC |
	| start   | -p newest-cni-903256 --memory=2200 --alsologtostderr   | newest-cni-903256            | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC | 29 Jul 24 18:47 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-903256             | newest-cni-903256            | jenkins | v1.33.1 | 29 Jul 24 18:47 UTC | 29 Jul 24 18:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-903256                                   | newest-cni-903256            | jenkins | v1.33.1 | 29 Jul 24 18:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 18:46:30
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 18:46:30.068748   84251 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:46:30.068839   84251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:46:30.068843   84251 out.go:304] Setting ErrFile to fd 2...
	I0729 18:46:30.068847   84251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:46:30.069023   84251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 18:46:30.069574   84251 out.go:298] Setting JSON to false
	I0729 18:46:30.070521   84251 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":8942,"bootTime":1722269848,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:46:30.070576   84251 start.go:139] virtualization: kvm guest
	I0729 18:46:30.072814   84251 out.go:177] * [newest-cni-903256] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:46:30.074119   84251 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 18:46:30.074124   84251 notify.go:220] Checking for updates...
	I0729 18:46:30.075449   84251 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:46:30.076849   84251 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:46:30.078050   84251 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 18:46:30.079534   84251 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:46:30.080830   84251 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:46:30.082628   84251 config.go:182] Loaded profile config "default-k8s-diff-port-502055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:46:30.082738   84251 config.go:182] Loaded profile config "embed-certs-409322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:46:30.082843   84251 config.go:182] Loaded profile config "no-preload-888056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:46:30.082933   84251 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:46:30.120413   84251 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 18:46:30.121714   84251 start.go:297] selected driver: kvm2
	I0729 18:46:30.121734   84251 start.go:901] validating driver "kvm2" against <nil>
	I0729 18:46:30.121748   84251 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:46:30.122750   84251 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:46:30.122844   84251 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19345-11206/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:46:30.138507   84251 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:46:30.138547   84251 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0729 18:46:30.138572   84251 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0729 18:46:30.138763   84251 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 18:46:30.138813   84251 cni.go:84] Creating CNI manager for ""
	I0729 18:46:30.138825   84251 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:46:30.138835   84251 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 18:46:30.138890   84251 start.go:340] cluster config:
	{Name:newest-cni-903256 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-903256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:46:30.139003   84251 iso.go:125] acquiring lock: {Name:mke302f851ce8256f9b44dd080ed38df68285cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:46:30.140640   84251 out.go:177] * Starting "newest-cni-903256" primary control-plane node in "newest-cni-903256" cluster
	I0729 18:46:30.141840   84251 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 18:46:30.141887   84251 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 18:46:30.141900   84251 cache.go:56] Caching tarball of preloaded images
	I0729 18:46:30.141986   84251 preload.go:172] Found /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:46:30.142001   84251 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0729 18:46:30.142091   84251 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/config.json ...
	I0729 18:46:30.142115   84251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/config.json: {Name:mk0ffbc23f36706df4690c2ad4313143e8f4dddb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:46:30.142272   84251 start.go:360] acquireMachinesLock for newest-cni-903256: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:46:30.142324   84251 start.go:364] duration metric: took 36.267µs to acquireMachinesLock for "newest-cni-903256"
	I0729 18:46:30.142348   84251 start.go:93] Provisioning new machine with config: &{Name:newest-cni-903256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-903256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:46:30.142453   84251 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 18:46:30.144008   84251 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 18:46:30.144197   84251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:46:30.144235   84251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:46:30.158515   84251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37823
	I0729 18:46:30.158914   84251 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:46:30.159427   84251 main.go:141] libmachine: Using API Version  1
	I0729 18:46:30.159446   84251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:46:30.159750   84251 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:46:30.159935   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetMachineName
	I0729 18:46:30.160081   84251 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:46:30.160260   84251 start.go:159] libmachine.API.Create for "newest-cni-903256" (driver="kvm2")
	I0729 18:46:30.160291   84251 client.go:168] LocalClient.Create starting
	I0729 18:46:30.160324   84251 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem
	I0729 18:46:30.160367   84251 main.go:141] libmachine: Decoding PEM data...
	I0729 18:46:30.160387   84251 main.go:141] libmachine: Parsing certificate...
	I0729 18:46:30.160462   84251 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem
	I0729 18:46:30.160486   84251 main.go:141] libmachine: Decoding PEM data...
	I0729 18:46:30.160503   84251 main.go:141] libmachine: Parsing certificate...
	I0729 18:46:30.160527   84251 main.go:141] libmachine: Running pre-create checks...
	I0729 18:46:30.160538   84251 main.go:141] libmachine: (newest-cni-903256) Calling .PreCreateCheck
	I0729 18:46:30.160867   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetConfigRaw
	I0729 18:46:30.161218   84251 main.go:141] libmachine: Creating machine...
	I0729 18:46:30.161231   84251 main.go:141] libmachine: (newest-cni-903256) Calling .Create
	I0729 18:46:30.161353   84251 main.go:141] libmachine: (newest-cni-903256) Creating KVM machine...
	I0729 18:46:30.162583   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found existing default KVM network
	I0729 18:46:30.163795   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:30.163643   84274 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ab:5b:77} reservation:<nil>}
	I0729 18:46:30.164793   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:30.164720   84274 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a87b0}
	I0729 18:46:30.164811   84251 main.go:141] libmachine: (newest-cni-903256) DBG | created network xml: 
	I0729 18:46:30.164827   84251 main.go:141] libmachine: (newest-cni-903256) DBG | <network>
	I0729 18:46:30.164845   84251 main.go:141] libmachine: (newest-cni-903256) DBG |   <name>mk-newest-cni-903256</name>
	I0729 18:46:30.164863   84251 main.go:141] libmachine: (newest-cni-903256) DBG |   <dns enable='no'/>
	I0729 18:46:30.164875   84251 main.go:141] libmachine: (newest-cni-903256) DBG |   
	I0729 18:46:30.164886   84251 main.go:141] libmachine: (newest-cni-903256) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0729 18:46:30.164900   84251 main.go:141] libmachine: (newest-cni-903256) DBG |     <dhcp>
	I0729 18:46:30.164914   84251 main.go:141] libmachine: (newest-cni-903256) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0729 18:46:30.164921   84251 main.go:141] libmachine: (newest-cni-903256) DBG |     </dhcp>
	I0729 18:46:30.164930   84251 main.go:141] libmachine: (newest-cni-903256) DBG |   </ip>
	I0729 18:46:30.164935   84251 main.go:141] libmachine: (newest-cni-903256) DBG |   
	I0729 18:46:30.164942   84251 main.go:141] libmachine: (newest-cni-903256) DBG | </network>
	I0729 18:46:30.164949   84251 main.go:141] libmachine: (newest-cni-903256) DBG | 
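Editor's note: the XML above is the private libvirt network the kvm2 driver generates before creating the VM. For readers unfamiliar with the flow, a trimmed-down sketch of defining and starting such a network through the libvirt Go bindings is below; this is not the driver's actual code, and the package path, function name, and autostart choice are assumptions.

    package kvmexample

    import (
        "libvirt.org/go/libvirt"
    )

    // defineAndStartNetwork persists a libvirt network from XML like the block shown above
    // and brings it up, roughly what "trying to create private KVM network" corresponds to in the log.
    func defineAndStartNetwork(networkXML string) error {
        conn, err := libvirt.NewConnect("qemu:///system") // same libvirt URI as in the log
        if err != nil {
            return err
        }
        defer conn.Close()

        net, err := conn.NetworkDefineXML(networkXML) // persist the network definition
        if err != nil {
            return err
        }
        defer net.Free()

        if err := net.SetAutostart(true); err != nil { // optional: start with libvirtd
            return err
        }
        return net.Create() // activate the network (equivalent of virsh net-start)
    }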
	I0729 18:46:30.170064   84251 main.go:141] libmachine: (newest-cni-903256) DBG | trying to create private KVM network mk-newest-cni-903256 192.168.50.0/24...
	I0729 18:46:30.242424   84251 main.go:141] libmachine: (newest-cni-903256) Setting up store path in /home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256 ...
	I0729 18:46:30.242467   84251 main.go:141] libmachine: (newest-cni-903256) Building disk image from file:///home/jenkins/minikube-integration/19345-11206/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 18:46:30.242478   84251 main.go:141] libmachine: (newest-cni-903256) DBG | private KVM network mk-newest-cni-903256 192.168.50.0/24 created
	I0729 18:46:30.242498   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:30.242335   84274 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 18:46:30.242673   84251 main.go:141] libmachine: (newest-cni-903256) Downloading /home/jenkins/minikube-integration/19345-11206/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19345-11206/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 18:46:30.494753   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:30.494601   84274 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa...
	I0729 18:46:30.681600   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:30.681500   84274 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/newest-cni-903256.rawdisk...
	I0729 18:46:30.681627   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Writing magic tar header
	I0729 18:46:30.681638   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Writing SSH key tar header
	I0729 18:46:30.681650   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:30.681620   84274 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256 ...
	I0729 18:46:30.681759   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256
	I0729 18:46:30.681785   84251 main.go:141] libmachine: (newest-cni-903256) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256 (perms=drwx------)
	I0729 18:46:30.681797   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube/machines
	I0729 18:46:30.681812   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 18:46:30.681820   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19345-11206
	I0729 18:46:30.681828   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 18:46:30.681833   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Checking permissions on dir: /home/jenkins
	I0729 18:46:30.681840   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Checking permissions on dir: /home
	I0729 18:46:30.681848   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Skipping /home - not owner
	I0729 18:46:30.681867   84251 main.go:141] libmachine: (newest-cni-903256) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube/machines (perms=drwxr-xr-x)
	I0729 18:46:30.681895   84251 main.go:141] libmachine: (newest-cni-903256) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206/.minikube (perms=drwxr-xr-x)
	I0729 18:46:30.681908   84251 main.go:141] libmachine: (newest-cni-903256) Setting executable bit set on /home/jenkins/minikube-integration/19345-11206 (perms=drwxrwxr-x)
	I0729 18:46:30.681926   84251 main.go:141] libmachine: (newest-cni-903256) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 18:46:30.681938   84251 main.go:141] libmachine: (newest-cni-903256) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 18:46:30.681949   84251 main.go:141] libmachine: (newest-cni-903256) Creating domain...
	I0729 18:46:30.683156   84251 main.go:141] libmachine: (newest-cni-903256) define libvirt domain using xml: 
	I0729 18:46:30.683175   84251 main.go:141] libmachine: (newest-cni-903256) <domain type='kvm'>
	I0729 18:46:30.683183   84251 main.go:141] libmachine: (newest-cni-903256)   <name>newest-cni-903256</name>
	I0729 18:46:30.683188   84251 main.go:141] libmachine: (newest-cni-903256)   <memory unit='MiB'>2200</memory>
	I0729 18:46:30.683195   84251 main.go:141] libmachine: (newest-cni-903256)   <vcpu>2</vcpu>
	I0729 18:46:30.683201   84251 main.go:141] libmachine: (newest-cni-903256)   <features>
	I0729 18:46:30.683208   84251 main.go:141] libmachine: (newest-cni-903256)     <acpi/>
	I0729 18:46:30.683213   84251 main.go:141] libmachine: (newest-cni-903256)     <apic/>
	I0729 18:46:30.683225   84251 main.go:141] libmachine: (newest-cni-903256)     <pae/>
	I0729 18:46:30.683232   84251 main.go:141] libmachine: (newest-cni-903256)     
	I0729 18:46:30.683238   84251 main.go:141] libmachine: (newest-cni-903256)   </features>
	I0729 18:46:30.683245   84251 main.go:141] libmachine: (newest-cni-903256)   <cpu mode='host-passthrough'>
	I0729 18:46:30.683250   84251 main.go:141] libmachine: (newest-cni-903256)   
	I0729 18:46:30.683256   84251 main.go:141] libmachine: (newest-cni-903256)   </cpu>
	I0729 18:46:30.683261   84251 main.go:141] libmachine: (newest-cni-903256)   <os>
	I0729 18:46:30.683268   84251 main.go:141] libmachine: (newest-cni-903256)     <type>hvm</type>
	I0729 18:46:30.683299   84251 main.go:141] libmachine: (newest-cni-903256)     <boot dev='cdrom'/>
	I0729 18:46:30.683323   84251 main.go:141] libmachine: (newest-cni-903256)     <boot dev='hd'/>
	I0729 18:46:30.683348   84251 main.go:141] libmachine: (newest-cni-903256)     <bootmenu enable='no'/>
	I0729 18:46:30.683366   84251 main.go:141] libmachine: (newest-cni-903256)   </os>
	I0729 18:46:30.683379   84251 main.go:141] libmachine: (newest-cni-903256)   <devices>
	I0729 18:46:30.683391   84251 main.go:141] libmachine: (newest-cni-903256)     <disk type='file' device='cdrom'>
	I0729 18:46:30.683419   84251 main.go:141] libmachine: (newest-cni-903256)       <source file='/home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/boot2docker.iso'/>
	I0729 18:46:30.683431   84251 main.go:141] libmachine: (newest-cni-903256)       <target dev='hdc' bus='scsi'/>
	I0729 18:46:30.683445   84251 main.go:141] libmachine: (newest-cni-903256)       <readonly/>
	I0729 18:46:30.683458   84251 main.go:141] libmachine: (newest-cni-903256)     </disk>
	I0729 18:46:30.683472   84251 main.go:141] libmachine: (newest-cni-903256)     <disk type='file' device='disk'>
	I0729 18:46:30.683484   84251 main.go:141] libmachine: (newest-cni-903256)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 18:46:30.683500   84251 main.go:141] libmachine: (newest-cni-903256)       <source file='/home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/newest-cni-903256.rawdisk'/>
	I0729 18:46:30.683510   84251 main.go:141] libmachine: (newest-cni-903256)       <target dev='hda' bus='virtio'/>
	I0729 18:46:30.683522   84251 main.go:141] libmachine: (newest-cni-903256)     </disk>
	I0729 18:46:30.683540   84251 main.go:141] libmachine: (newest-cni-903256)     <interface type='network'>
	I0729 18:46:30.683553   84251 main.go:141] libmachine: (newest-cni-903256)       <source network='mk-newest-cni-903256'/>
	I0729 18:46:30.683564   84251 main.go:141] libmachine: (newest-cni-903256)       <model type='virtio'/>
	I0729 18:46:30.683573   84251 main.go:141] libmachine: (newest-cni-903256)     </interface>
	I0729 18:46:30.683583   84251 main.go:141] libmachine: (newest-cni-903256)     <interface type='network'>
	I0729 18:46:30.683597   84251 main.go:141] libmachine: (newest-cni-903256)       <source network='default'/>
	I0729 18:46:30.683611   84251 main.go:141] libmachine: (newest-cni-903256)       <model type='virtio'/>
	I0729 18:46:30.683623   84251 main.go:141] libmachine: (newest-cni-903256)     </interface>
	I0729 18:46:30.683631   84251 main.go:141] libmachine: (newest-cni-903256)     <serial type='pty'>
	I0729 18:46:30.683642   84251 main.go:141] libmachine: (newest-cni-903256)       <target port='0'/>
	I0729 18:46:30.683652   84251 main.go:141] libmachine: (newest-cni-903256)     </serial>
	I0729 18:46:30.683661   84251 main.go:141] libmachine: (newest-cni-903256)     <console type='pty'>
	I0729 18:46:30.683678   84251 main.go:141] libmachine: (newest-cni-903256)       <target type='serial' port='0'/>
	I0729 18:46:30.683694   84251 main.go:141] libmachine: (newest-cni-903256)     </console>
	I0729 18:46:30.683703   84251 main.go:141] libmachine: (newest-cni-903256)     <rng model='virtio'>
	I0729 18:46:30.683712   84251 main.go:141] libmachine: (newest-cni-903256)       <backend model='random'>/dev/random</backend>
	I0729 18:46:30.683720   84251 main.go:141] libmachine: (newest-cni-903256)     </rng>
	I0729 18:46:30.683725   84251 main.go:141] libmachine: (newest-cni-903256)     
	I0729 18:46:30.683733   84251 main.go:141] libmachine: (newest-cni-903256)     
	I0729 18:46:30.683744   84251 main.go:141] libmachine: (newest-cni-903256)   </devices>
	I0729 18:46:30.683752   84251 main.go:141] libmachine: (newest-cni-903256) </domain>
	I0729 18:46:30.683764   84251 main.go:141] libmachine: (newest-cni-903256) 
	I0729 18:46:30.688085   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:18:c2:da in network default
	I0729 18:46:30.688651   84251 main.go:141] libmachine: (newest-cni-903256) Ensuring networks are active...
	I0729 18:46:30.688672   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:30.689327   84251 main.go:141] libmachine: (newest-cni-903256) Ensuring network default is active
	I0729 18:46:30.689607   84251 main.go:141] libmachine: (newest-cni-903256) Ensuring network mk-newest-cni-903256 is active
	I0729 18:46:30.690045   84251 main.go:141] libmachine: (newest-cni-903256) Getting domain xml...
	I0729 18:46:30.690701   84251 main.go:141] libmachine: (newest-cni-903256) Creating domain...
	I0729 18:46:31.952214   84251 main.go:141] libmachine: (newest-cni-903256) Waiting to get IP...
	I0729 18:46:31.952988   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:31.953420   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:31.953439   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:31.953396   84274 retry.go:31] will retry after 261.751977ms: waiting for machine to come up
	I0729 18:46:32.216957   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:32.217417   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:32.217447   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:32.217369   84274 retry.go:31] will retry after 270.043866ms: waiting for machine to come up
	I0729 18:46:32.488853   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:32.489379   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:32.489412   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:32.489350   84274 retry.go:31] will retry after 335.253907ms: waiting for machine to come up
	I0729 18:46:32.825833   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:32.826289   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:32.826320   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:32.826219   84274 retry.go:31] will retry after 496.757412ms: waiting for machine to come up
	I0729 18:46:33.324528   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:33.324974   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:33.325002   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:33.324930   84274 retry.go:31] will retry after 672.944303ms: waiting for machine to come up
	I0729 18:46:34.000034   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:34.000523   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:34.000571   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:34.000490   84274 retry.go:31] will retry after 913.112646ms: waiting for machine to come up
	I0729 18:46:34.915564   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:34.915967   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:34.916005   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:34.915926   84274 retry.go:31] will retry after 766.485053ms: waiting for machine to come up
	I0729 18:46:35.684510   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:35.684979   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:35.685007   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:35.684927   84274 retry.go:31] will retry after 1.236100877s: waiting for machine to come up
	I0729 18:46:36.922270   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:36.922772   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:36.922810   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:36.922740   84274 retry.go:31] will retry after 1.142869002s: waiting for machine to come up
	I0729 18:46:38.067357   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:38.067751   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:38.067778   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:38.067704   84274 retry.go:31] will retry after 1.58112412s: waiting for machine to come up
	I0729 18:46:39.651433   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:39.651908   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:39.651945   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:39.651864   84274 retry.go:31] will retry after 2.06459354s: waiting for machine to come up
	I0729 18:46:41.717755   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:41.718270   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:41.718297   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:41.718234   84274 retry.go:31] will retry after 3.310077087s: waiting for machine to come up
	I0729 18:46:45.031346   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:45.031768   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:45.031796   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:45.031729   84274 retry.go:31] will retry after 3.683065397s: waiting for machine to come up
	I0729 18:46:48.718353   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:48.718789   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find current IP address of domain newest-cni-903256 in network mk-newest-cni-903256
	I0729 18:46:48.718837   84251 main.go:141] libmachine: (newest-cni-903256) DBG | I0729 18:46:48.718722   84274 retry.go:31] will retry after 4.039703001s: waiting for machine to come up
	I0729 18:46:52.761590   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:52.762066   84251 main.go:141] libmachine: (newest-cni-903256) Found IP for machine: 192.168.50.148
	I0729 18:46:52.762098   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has current primary IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:52.762108   84251 main.go:141] libmachine: (newest-cni-903256) Reserving static IP address...
	I0729 18:46:52.762525   84251 main.go:141] libmachine: (newest-cni-903256) DBG | unable to find host DHCP lease matching {name: "newest-cni-903256", mac: "52:54:00:b7:b1:4e", ip: "192.168.50.148"} in network mk-newest-cni-903256
	I0729 18:46:52.841783   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Getting to WaitForSSH function...
	I0729 18:46:52.841813   84251 main.go:141] libmachine: (newest-cni-903256) Reserved static IP address: 192.168.50.148
	I0729 18:46:52.841826   84251 main.go:141] libmachine: (newest-cni-903256) Waiting for SSH to be available...
	I0729 18:46:52.845161   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:52.845611   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:52.845641   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:52.845837   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Using SSH client type: external
	I0729 18:46:52.845863   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa (-rw-------)
	I0729 18:46:52.845901   84251 main.go:141] libmachine: (newest-cni-903256) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:46:52.845917   84251 main.go:141] libmachine: (newest-cni-903256) DBG | About to run SSH command:
	I0729 18:46:52.845937   84251 main.go:141] libmachine: (newest-cni-903256) DBG | exit 0
	I0729 18:46:52.978416   84251 main.go:141] libmachine: (newest-cni-903256) DBG | SSH cmd err, output: <nil>: 
	I0729 18:46:52.978665   84251 main.go:141] libmachine: (newest-cni-903256) KVM machine creation complete!
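Note: the "will retry after ..." lines above are libmachine polling the libvirt network until the new domain's MAC address picks up a DHCP lease. A rough manual equivalent against the same network (purely illustrative; the connection URI, network name and MAC are taken from the log, the loop itself is not minikube code):

    # poll the mk-newest-cni-903256 network until the domain's MAC shows a lease
    while ! virsh -c qemu:///system net-dhcp-leases mk-newest-cni-903256 | grep -q '52:54:00:b7:b1:4e'; do
        sleep 2    # libmachine grows its backoff between attempts; a fixed sleep is fine by hand
    done
    virsh -c qemu:///system net-dhcp-leases mk-newest-cni-903256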
	I0729 18:46:52.979018   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetConfigRaw
	I0729 18:46:52.979528   84251 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:46:52.979739   84251 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:46:52.979890   84251 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 18:46:52.979905   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetState
	I0729 18:46:52.981387   84251 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 18:46:52.981404   84251 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 18:46:52.981422   84251 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 18:46:52.981431   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:46:52.984217   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:52.984602   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:52.984621   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:52.984870   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:46:52.985043   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:52.985232   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:52.985369   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:46:52.985508   84251 main.go:141] libmachine: Using SSH client type: native
	I0729 18:46:52.985740   84251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.148 22 <nil> <nil>}
	I0729 18:46:52.985755   84251 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 18:46:53.093943   84251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:46:53.093965   84251 main.go:141] libmachine: Detecting the provisioner...
	I0729 18:46:53.093973   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:46:53.097127   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.097563   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:53.097586   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.097793   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:46:53.097985   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:53.098175   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:53.098317   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:46:53.098531   84251 main.go:141] libmachine: Using SSH client type: native
	I0729 18:46:53.098688   84251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.148 22 <nil> <nil>}
	I0729 18:46:53.098699   84251 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 18:46:53.208065   84251 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 18:46:53.208140   84251 main.go:141] libmachine: found compatible host: buildroot
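The provisioner detection above is simply "cat /etc/os-release" over SSH with the NAME/VERSION fields matched against known provisioners. On the guest the same fields can be read directly, since os-release is a shell-style key=value file (illustrative sketch):

    # /etc/os-release can be sourced as shell
    . /etc/os-release
    echo "detected: ${NAME} ${VERSION_ID}"    # prints: detected: Buildroot 2023.02.9 on this ISO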
	I0729 18:46:53.208150   84251 main.go:141] libmachine: Provisioning with buildroot...
	I0729 18:46:53.208158   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetMachineName
	I0729 18:46:53.208419   84251 buildroot.go:166] provisioning hostname "newest-cni-903256"
	I0729 18:46:53.208446   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetMachineName
	I0729 18:46:53.208656   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:46:53.211661   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.212055   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:53.212083   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.212277   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:46:53.212489   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:53.212670   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:53.212857   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:46:53.213022   84251 main.go:141] libmachine: Using SSH client type: native
	I0729 18:46:53.213256   84251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.148 22 <nil> <nil>}
	I0729 18:46:53.213276   84251 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-903256 && echo "newest-cni-903256" | sudo tee /etc/hostname
	I0729 18:46:53.337791   84251 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-903256
	
	I0729 18:46:53.337830   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:46:53.340983   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.341377   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:53.341404   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.341580   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:46:53.341786   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:53.341952   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:53.342071   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:46:53.342232   84251 main.go:141] libmachine: Using SSH client type: native
	I0729 18:46:53.342466   84251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.148 22 <nil> <nil>}
	I0729 18:46:53.342490   84251 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-903256' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-903256/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-903256' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:46:53.460242   84251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
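The inline script in the previous command is idempotent: it only rewrites the 127.0.1.1 entry when /etc/hosts does not already reference the new hostname. A quick manual check on the guest (illustrative):

    hostname                        # expected: newest-cni-903256
    grep -n '127.0.1.1' /etc/hosts  # expected: a "127.0.1.1 newest-cni-903256" entry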
	I0729 18:46:53.460278   84251 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:46:53.460330   84251 buildroot.go:174] setting up certificates
	I0729 18:46:53.460348   84251 provision.go:84] configureAuth start
	I0729 18:46:53.460363   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetMachineName
	I0729 18:46:53.460672   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetIP
	I0729 18:46:53.463624   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.464048   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:53.464079   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.464212   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:46:53.466742   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.467182   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:53.467215   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.467392   84251 provision.go:143] copyHostCerts
	I0729 18:46:53.467449   84251 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:46:53.467462   84251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:46:53.467550   84251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:46:53.467682   84251 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:46:53.467694   84251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:46:53.467731   84251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:46:53.467823   84251 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:46:53.467833   84251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:46:53.467867   84251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:46:53.467947   84251 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.newest-cni-903256 san=[127.0.0.1 192.168.50.148 localhost minikube newest-cni-903256]
	I0729 18:46:53.590406   84251 provision.go:177] copyRemoteCerts
	I0729 18:46:53.590510   84251 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:46:53.590547   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:46:53.593210   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.593574   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:53.593602   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.593784   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:46:53.593986   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:53.594200   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:46:53.594376   84251 sshutil.go:53] new ssh client: &{IP:192.168.50.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa Username:docker}
	I0729 18:46:53.681486   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:46:53.709490   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 18:46:53.737477   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:46:53.764179   84251 provision.go:87] duration metric: took 303.817364ms to configureAuth
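configureAuth above regenerates the server certificate with the SANs listed at provision.go:117 and copies ca.pem, server.pem and server-key.pem to /etc/docker on the guest. The SAN list can be confirmed with openssl (sketch; the paths are the remote ones from the log):

    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'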
	I0729 18:46:53.764205   84251 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:46:53.764368   84251 config.go:182] Loaded profile config "newest-cni-903256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:46:53.764428   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:46:53.767476   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.767902   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:53.767945   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:53.768173   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:46:53.768384   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:53.768551   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:53.768718   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:46:53.768894   84251 main.go:141] libmachine: Using SSH client type: native
	I0729 18:46:53.769112   84251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.148 22 <nil> <nil>}
	I0729 18:46:53.769134   84251 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:46:54.050199   84251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
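The command above writes CRIO_MINIKUBE_OPTIONS (an --insecure-registry entry for the 10.96.0.0/12 service CIDR) to /etc/sysconfig/crio.minikube and restarts CRI-O. Whether the variable takes effect depends on the crio unit referencing that file as an EnvironmentFile, which can be checked on the guest (illustrative; the EnvironmentFile wiring is an assumption about the ISO, not something shown in the log):

    cat /etc/sysconfig/crio.minikube                   # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl cat crio | grep -i -A1 EnvironmentFile   # the unit should point at the sysconfig drop-in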
	
	I0729 18:46:54.050232   84251 main.go:141] libmachine: Checking connection to Docker...
	I0729 18:46:54.050243   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetURL
	I0729 18:46:54.051782   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Using libvirt version 6000000
	I0729 18:46:54.054540   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.054892   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:54.054923   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.055102   84251 main.go:141] libmachine: Docker is up and running!
	I0729 18:46:54.055122   84251 main.go:141] libmachine: Reticulating splines...
	I0729 18:46:54.055129   84251 client.go:171] duration metric: took 23.894829086s to LocalClient.Create
	I0729 18:46:54.055162   84251 start.go:167] duration metric: took 23.894902783s to libmachine.API.Create "newest-cni-903256"
	I0729 18:46:54.055173   84251 start.go:293] postStartSetup for "newest-cni-903256" (driver="kvm2")
	I0729 18:46:54.055184   84251 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:46:54.055199   84251 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:46:54.055481   84251 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:46:54.055508   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:46:54.057713   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.058095   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:54.058116   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.058273   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:46:54.058567   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:54.058723   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:46:54.058905   84251 sshutil.go:53] new ssh client: &{IP:192.168.50.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa Username:docker}
	I0729 18:46:54.145599   84251 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:46:54.150008   84251 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:46:54.150036   84251 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:46:54.150104   84251 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:46:54.150210   84251 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:46:54.150336   84251 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:46:54.160349   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:46:54.186202   84251 start.go:296] duration metric: took 131.017523ms for postStartSetup
	I0729 18:46:54.186252   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetConfigRaw
	I0729 18:46:54.186877   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetIP
	I0729 18:46:54.190687   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.191094   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:54.191123   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.191371   84251 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/config.json ...
	I0729 18:46:54.191580   84251 start.go:128] duration metric: took 24.049117711s to createHost
	I0729 18:46:54.191604   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:46:54.193915   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.194380   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:54.194410   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.194554   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:46:54.194734   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:54.194850   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:54.194957   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:46:54.195072   84251 main.go:141] libmachine: Using SSH client type: native
	I0729 18:46:54.195283   84251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.148 22 <nil> <nil>}
	I0729 18:46:54.195300   84251 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:46:54.307305   84251 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722278814.272489621
	
	I0729 18:46:54.307331   84251 fix.go:216] guest clock: 1722278814.272489621
	I0729 18:46:54.307340   84251 fix.go:229] Guest: 2024-07-29 18:46:54.272489621 +0000 UTC Remote: 2024-07-29 18:46:54.191592989 +0000 UTC m=+24.157710013 (delta=80.896632ms)
	I0729 18:46:54.307364   84251 fix.go:200] guest clock delta is within tolerance: 80.896632ms
	I0729 18:46:54.307370   84251 start.go:83] releasing machines lock for "newest-cni-903256", held for 24.165034232s
	I0729 18:46:54.307392   84251 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:46:54.307689   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetIP
	I0729 18:46:54.310475   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.310823   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:54.310852   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.311090   84251 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:46:54.311579   84251 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:46:54.311741   84251 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:46:54.311852   84251 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:46:54.311902   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:46:54.311953   84251 ssh_runner.go:195] Run: cat /version.json
	I0729 18:46:54.311970   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:46:54.314787   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.314919   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.315182   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:54.315227   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.315310   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:46:54.315311   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:54.315331   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:54.315497   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:46:54.315504   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:54.315666   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:46:54.315666   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:46:54.315846   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:46:54.315857   84251 sshutil.go:53] new ssh client: &{IP:192.168.50.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa Username:docker}
	I0729 18:46:54.315981   84251 sshutil.go:53] new ssh client: &{IP:192.168.50.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa Username:docker}
	I0729 18:46:54.418629   84251 ssh_runner.go:195] Run: systemctl --version
	I0729 18:46:54.424974   84251 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:46:54.589829   84251 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:46:54.596241   84251 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:46:54.596299   84251 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:46:54.613625   84251 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:46:54.613650   84251 start.go:495] detecting cgroup driver to use...
	I0729 18:46:54.613705   84251 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:46:54.630574   84251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:46:54.644112   84251 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:46:54.644186   84251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:46:54.658500   84251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:46:54.672343   84251 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:46:54.797851   84251 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:46:54.947859   84251 docker.go:233] disabling docker service ...
	I0729 18:46:54.947930   84251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:46:54.962507   84251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:46:54.977006   84251 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:46:55.120883   84251 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:46:55.262063   84251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:46:55.277746   84251 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:46:55.296751   84251 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 18:46:55.296842   84251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:46:55.307376   84251 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:46:55.307443   84251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:46:55.318813   84251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:46:55.331396   84251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:46:55.342410   84251 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:46:55.353158   84251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:46:55.363902   84251 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:46:55.382202   84251 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:46:55.394506   84251 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:46:55.404196   84251 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:46:55.404238   84251 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:46:55.418305   84251 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:46:55.428781   84251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:46:55.554744   84251 ssh_runner.go:195] Run: sudo systemctl restart crio
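The sed sequence above pins the pause image, switches the cgroup manager to cgroupfs, sets conmon_cgroup to "pod" and adds the net.ipv4.ip_unprivileged_port_start=0 sysctl, all in /etc/crio/crio.conf.d/02-crio.conf, before restarting CRI-O. The effective values can be read back from the drop-in or from crio itself (sketch; "crio config" is the same command the log runs further down):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager'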
	I0729 18:46:55.694512   84251 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:46:55.694569   84251 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:46:55.699802   84251 start.go:563] Will wait 60s for crictl version
	I0729 18:46:55.699864   84251 ssh_runner.go:195] Run: which crictl
	I0729 18:46:55.703876   84251 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:46:55.747949   84251 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:46:55.748032   84251 ssh_runner.go:195] Run: crio --version
	I0729 18:46:55.777800   84251 ssh_runner.go:195] Run: crio --version
	I0729 18:46:55.807768   84251 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 18:46:55.809543   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetIP
	I0729 18:46:55.812449   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:55.812807   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:46:55.812834   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:46:55.813071   84251 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 18:46:55.817841   84251 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:46:55.831890   84251 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0729 18:46:55.833297   84251 kubeadm.go:883] updating cluster {Name:newest-cni-903256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:newest-cni-903256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.148 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:46:55.833427   84251 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 18:46:55.833484   84251 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:46:55.873477   84251 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 18:46:55.873545   84251 ssh_runner.go:195] Run: which lz4
	I0729 18:46:55.877912   84251 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 18:46:55.882546   84251 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:46:55.882571   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (387176433 bytes)
	I0729 18:46:57.257976   84251 crio.go:462] duration metric: took 1.380106283s to copy over tarball
	I0729 18:46:57.258047   84251 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:46:59.324700   84251 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.066624097s)
	I0729 18:46:59.324730   84251 crio.go:469] duration metric: took 2.066726704s to extract the tarball
	I0729 18:46:59.324739   84251 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 18:46:59.361113   84251 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:46:59.404541   84251 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:46:59.404562   84251 cache_images.go:84] Images are preloaded, skipping loading
	I0729 18:46:59.404572   84251 kubeadm.go:934] updating node { 192.168.50.148 8443 v1.31.0-beta.0 crio true true} ...
	I0729 18:46:59.404686   84251 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-903256 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-903256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:46:59.404769   84251 ssh_runner.go:195] Run: crio config
	I0729 18:46:59.456219   84251 cni.go:84] Creating CNI manager for ""
	I0729 18:46:59.456246   84251 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:46:59.456257   84251 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0729 18:46:59.456287   84251 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.148 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-903256 NodeName:newest-cni-903256 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.148"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] Feature
Args:map[] NodeIP:192.168.50.148 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:46:59.456407   84251 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.148
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-903256"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.148
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.148"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
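The kubeadm/kubelet/kube-proxy configuration above is what later gets copied to /var/tmp/minikube/kubeadm.yaml.new. Newer kubeadm releases ship a "config validate" subcommand that can sanity-check such a file before init; assuming it is present in the v1.31.0-beta.0 binaries used here (the subcommand was added around v1.26), a manual check would look like:

    sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new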
	
	I0729 18:46:59.456459   84251 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 18:46:59.466666   84251 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:46:59.466723   84251 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:46:59.476108   84251 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0729 18:46:59.494182   84251 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 18:46:59.511503   84251 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0729 18:46:59.528305   84251 ssh_runner.go:195] Run: grep 192.168.50.148	control-plane.minikube.internal$ /etc/hosts
	I0729 18:46:59.532297   84251 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.148	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:46:59.546812   84251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:46:59.689068   84251 ssh_runner.go:195] Run: sudo systemctl start kubelet
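At this point the kubelet drop-in shown earlier (10-kubeadm.conf, 361 bytes) and the kubelet.service unit have been written, daemon-reload has run and the service has been started. The flags actually in effect can be inspected on the guest (illustrative):

    systemctl cat kubelet                            # unit plus the 10-kubeadm.conf override
    systemctl show kubelet -p ExecStart --no-pager   # effective ExecStart, including --node-ip and --feature-gates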
	I0729 18:46:59.707507   84251 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256 for IP: 192.168.50.148
	I0729 18:46:59.707535   84251 certs.go:194] generating shared ca certs ...
	I0729 18:46:59.707556   84251 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:46:59.707727   84251 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:46:59.707783   84251 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:46:59.707796   84251 certs.go:256] generating profile certs ...
	I0729 18:46:59.707869   84251 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/client.key
	I0729 18:46:59.707886   84251 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/client.crt with IP's: []
	I0729 18:46:59.923871   84251 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/client.crt ...
	I0729 18:46:59.923898   84251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/client.crt: {Name:mk13f9cbc8097dabb8f92c284c8b5b040e870526 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:46:59.924062   84251 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/client.key ...
	I0729 18:46:59.924072   84251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/client.key: {Name:mkfa20e8be59f28104c177b931c1698c76d724d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:46:59.924167   84251 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/apiserver.key.fd2e148c
	I0729 18:46:59.924181   84251 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/apiserver.crt.fd2e148c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.148]
	I0729 18:46:59.968262   84251 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/apiserver.crt.fd2e148c ...
	I0729 18:46:59.968289   84251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/apiserver.crt.fd2e148c: {Name:mk93e8566c6330eb96edb4f77599931071304f61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:46:59.968436   84251 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/apiserver.key.fd2e148c ...
	I0729 18:46:59.968447   84251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/apiserver.key.fd2e148c: {Name:mk72789f9d7dea0d484a222371622b87d0e129d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:46:59.968517   84251 certs.go:381] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/apiserver.crt.fd2e148c -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/apiserver.crt
	I0729 18:46:59.968596   84251 certs.go:385] copying /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/apiserver.key.fd2e148c -> /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/apiserver.key
	I0729 18:46:59.968653   84251 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/proxy-client.key
	I0729 18:46:59.968667   84251 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/proxy-client.crt with IP's: []
	I0729 18:47:00.187389   84251 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/proxy-client.crt ...
	I0729 18:47:00.187416   84251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/proxy-client.crt: {Name:mk35c61426a95294573e08ec36968938eb77df8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:47:00.187598   84251 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/proxy-client.key ...
	I0729 18:47:00.187616   84251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/proxy-client.key: {Name:mkd983dc814956bf3aa671390bb3b5df191d2cfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:47:00.187841   84251 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:47:00.187893   84251 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:47:00.187907   84251 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:47:00.187948   84251 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:47:00.187982   84251 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:47:00.188014   84251 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:47:00.188067   84251 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:47:00.188630   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:47:00.215261   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:47:00.240262   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:47:00.264244   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:47:00.289024   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 18:47:00.315375   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 18:47:00.341518   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:47:00.367069   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/newest-cni-903256/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 18:47:00.392914   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:47:00.418428   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:47:00.444113   84251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:47:00.469085   84251 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:47:00.489671   84251 ssh_runner.go:195] Run: openssl version
	I0729 18:47:00.503876   84251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:47:00.518395   84251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:47:00.523520   84251 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:47:00.523570   84251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:47:00.533762   84251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 18:47:00.547387   84251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:47:00.559607   84251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:47:00.564806   84251 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:47:00.564867   84251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:47:00.570663   84251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:47:00.582224   84251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:47:00.593191   84251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:47:00.597963   84251 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:47:00.598014   84251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:47:00.604192   84251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:47:00.616489   84251 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:47:00.620696   84251 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 18:47:00.620747   84251 kubeadm.go:392] StartCluster: {Name:newest-cni-903256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:newest-cni-903256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.148 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:47:00.620835   84251 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:47:00.620877   84251 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:47:00.664843   84251 cri.go:89] found id: ""
	I0729 18:47:00.664921   84251 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:47:00.676750   84251 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:47:00.692609   84251 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:47:00.704473   84251 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:47:00.704495   84251 kubeadm.go:157] found existing configuration files:
	
	I0729 18:47:00.704544   84251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:47:00.714471   84251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:47:00.714523   84251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:47:00.724327   84251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:47:00.733853   84251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:47:00.733918   84251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:47:00.745273   84251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:47:00.756473   84251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:47:00.756527   84251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:47:00.766834   84251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:47:00.776882   84251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:47:00.776946   84251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:47:00.787617   84251 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:47:00.908814   84251 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 18:47:00.908880   84251 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:47:01.042350   84251 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:47:01.042565   84251 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:47:01.042705   84251 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 18:47:01.055990   84251 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:47:01.068492   84251 out.go:204]   - Generating certificates and keys ...
	I0729 18:47:01.068633   84251 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:47:01.068786   84251 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:47:01.377003   84251 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 18:47:01.640875   84251 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 18:47:01.832596   84251 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 18:47:02.012884   84251 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 18:47:02.194005   84251 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 18:47:02.194438   84251 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-903256] and IPs [192.168.50.148 127.0.0.1 ::1]
	I0729 18:47:02.411043   84251 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 18:47:02.411652   84251 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-903256] and IPs [192.168.50.148 127.0.0.1 ::1]
	I0729 18:47:02.607561   84251 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 18:47:02.737179   84251 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 18:47:02.812883   84251 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 18:47:02.813206   84251 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:47:02.936209   84251 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:47:03.126551   84251 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 18:47:03.209779   84251 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:47:03.341381   84251 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:47:03.595883   84251 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:47:03.596658   84251 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:47:03.600803   84251 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:47:03.603419   84251 out.go:204]   - Booting up control plane ...
	I0729 18:47:03.603527   84251 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:47:03.603624   84251 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:47:03.603719   84251 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:47:03.620562   84251 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:47:03.626338   84251 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:47:03.626409   84251 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:47:03.760921   84251 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 18:47:03.761040   84251 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 18:47:04.761578   84251 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001501919s
	I0729 18:47:04.761696   84251 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 18:47:09.765313   84251 kubeadm.go:310] [api-check] The API server is healthy after 5.004480532s
	I0729 18:47:09.789440   84251 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 18:47:09.833264   84251 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 18:47:09.867921   84251 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 18:47:09.868097   84251 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-903256 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 18:47:09.881128   84251 kubeadm.go:310] [bootstrap-token] Using token: t1908v.ayn7d9z9sdth0t8s
	I0729 18:47:09.882483   84251 out.go:204]   - Configuring RBAC rules ...
	I0729 18:47:09.882630   84251 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 18:47:09.891734   84251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 18:47:09.904043   84251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 18:47:09.907805   84251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 18:47:09.912748   84251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 18:47:09.926428   84251 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 18:47:10.170825   84251 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 18:47:10.625352   84251 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 18:47:11.169462   84251 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 18:47:11.170492   84251 kubeadm.go:310] 
	I0729 18:47:11.170573   84251 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 18:47:11.170584   84251 kubeadm.go:310] 
	I0729 18:47:11.170698   84251 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 18:47:11.170714   84251 kubeadm.go:310] 
	I0729 18:47:11.170736   84251 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 18:47:11.170813   84251 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 18:47:11.170901   84251 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 18:47:11.170910   84251 kubeadm.go:310] 
	I0729 18:47:11.170986   84251 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 18:47:11.170994   84251 kubeadm.go:310] 
	I0729 18:47:11.171064   84251 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 18:47:11.171085   84251 kubeadm.go:310] 
	I0729 18:47:11.171187   84251 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 18:47:11.171294   84251 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 18:47:11.171393   84251 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 18:47:11.171427   84251 kubeadm.go:310] 
	I0729 18:47:11.171549   84251 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 18:47:11.171650   84251 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 18:47:11.171660   84251 kubeadm.go:310] 
	I0729 18:47:11.171761   84251 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token t1908v.ayn7d9z9sdth0t8s \
	I0729 18:47:11.171891   84251 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 \
	I0729 18:47:11.171921   84251 kubeadm.go:310] 	--control-plane 
	I0729 18:47:11.171930   84251 kubeadm.go:310] 
	I0729 18:47:11.172065   84251 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 18:47:11.172076   84251 kubeadm.go:310] 
	I0729 18:47:11.172180   84251 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token t1908v.ayn7d9z9sdth0t8s \
	I0729 18:47:11.172326   84251 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 
	I0729 18:47:11.173205   84251 kubeadm.go:310] W0729 18:47:00.875986     845 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 18:47:11.173499   84251 kubeadm.go:310] W0729 18:47:00.877121     845 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 18:47:11.173649   84251 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:47:11.173681   84251 cni.go:84] Creating CNI manager for ""
	I0729 18:47:11.173693   84251 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:47:11.175532   84251 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:47:11.177124   84251 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:47:11.188139   84251 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 18:47:11.207839   84251 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 18:47:11.207962   84251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:47:11.207980   84251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-903256 minikube.k8s.io/updated_at=2024_07_29T18_47_11_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899 minikube.k8s.io/name=newest-cni-903256 minikube.k8s.io/primary=true
	I0729 18:47:11.437645   84251 ops.go:34] apiserver oom_adj: -16
	I0729 18:47:11.437808   84251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:47:11.938709   84251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:47:12.438402   84251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:47:12.937973   84251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:47:13.438499   84251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:47:13.938485   84251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:47:14.437929   84251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:47:14.938783   84251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:47:15.438557   84251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:47:15.571843   84251 kubeadm.go:1113] duration metric: took 4.363925544s to wait for elevateKubeSystemPrivileges
	I0729 18:47:15.571892   84251 kubeadm.go:394] duration metric: took 14.951145529s to StartCluster
	I0729 18:47:15.571914   84251 settings.go:142] acquiring lock: {Name:mkd2c4591636cc1d19b23a0dab1807db2e7ea395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:47:15.572001   84251 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:47:15.574767   84251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:47:15.575029   84251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 18:47:15.575087   84251 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.148 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:47:15.575163   84251 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 18:47:15.575252   84251 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-903256"
	I0729 18:47:15.575286   84251 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-903256"
	I0729 18:47:15.575298   84251 config.go:182] Loaded profile config "newest-cni-903256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:47:15.575349   84251 host.go:66] Checking if "newest-cni-903256" exists ...
	I0729 18:47:15.575348   84251 addons.go:69] Setting default-storageclass=true in profile "newest-cni-903256"
	I0729 18:47:15.575401   84251 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-903256"
	I0729 18:47:15.575790   84251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:47:15.575809   84251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:47:15.575821   84251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:47:15.575837   84251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:47:15.576657   84251 out.go:177] * Verifying Kubernetes components...
	I0729 18:47:15.577924   84251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:47:15.592493   84251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40269
	I0729 18:47:15.592495   84251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35325
	I0729 18:47:15.593167   84251 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:47:15.593402   84251 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:47:15.593924   84251 main.go:141] libmachine: Using API Version  1
	I0729 18:47:15.593945   84251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:47:15.594060   84251 main.go:141] libmachine: Using API Version  1
	I0729 18:47:15.594080   84251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:47:15.594287   84251 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:47:15.594422   84251 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:47:15.594547   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetState
	I0729 18:47:15.594898   84251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:47:15.594921   84251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:47:15.598935   84251 addons.go:234] Setting addon default-storageclass=true in "newest-cni-903256"
	I0729 18:47:15.598979   84251 host.go:66] Checking if "newest-cni-903256" exists ...
	I0729 18:47:15.599344   84251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:47:15.599388   84251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:47:15.615129   84251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40453
	I0729 18:47:15.615641   84251 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:47:15.616206   84251 main.go:141] libmachine: Using API Version  1
	I0729 18:47:15.616231   84251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:47:15.616609   84251 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:47:15.616824   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetState
	I0729 18:47:15.616992   84251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33381
	I0729 18:47:15.617359   84251 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:47:15.617923   84251 main.go:141] libmachine: Using API Version  1
	I0729 18:47:15.617940   84251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:47:15.618449   84251 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:47:15.619034   84251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:47:15.619060   84251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:47:15.619282   84251 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:47:15.621024   84251 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:47:15.622287   84251 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:47:15.622302   84251 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 18:47:15.622316   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:47:15.625615   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:15.626080   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:47:15.626110   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:15.626417   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:47:15.626661   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:47:15.626917   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:47:15.627114   84251 sshutil.go:53] new ssh client: &{IP:192.168.50.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa Username:docker}
	I0729 18:47:15.635160   84251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37493
	I0729 18:47:15.635829   84251 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:47:15.636470   84251 main.go:141] libmachine: Using API Version  1
	I0729 18:47:15.636491   84251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:47:15.636895   84251 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:47:15.637084   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetState
	I0729 18:47:15.638538   84251 main.go:141] libmachine: (newest-cni-903256) Calling .DriverName
	I0729 18:47:15.638758   84251 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 18:47:15.638773   84251 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 18:47:15.638789   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHHostname
	I0729 18:47:15.641560   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:15.641940   84251 main.go:141] libmachine: (newest-cni-903256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b1:4e", ip: ""} in network mk-newest-cni-903256: {Iface:virbr3 ExpiryTime:2024-07-29 19:46:45 +0000 UTC Type:0 Mac:52:54:00:b7:b1:4e Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:newest-cni-903256 Clientid:01:52:54:00:b7:b1:4e}
	I0729 18:47:15.641980   84251 main.go:141] libmachine: (newest-cni-903256) DBG | domain newest-cni-903256 has defined IP address 192.168.50.148 and MAC address 52:54:00:b7:b1:4e in network mk-newest-cni-903256
	I0729 18:47:15.642295   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHPort
	I0729 18:47:15.642505   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHKeyPath
	I0729 18:47:15.642654   84251 main.go:141] libmachine: (newest-cni-903256) Calling .GetSSHUsername
	I0729 18:47:15.642798   84251 sshutil.go:53] new ssh client: &{IP:192.168.50.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/newest-cni-903256/id_rsa Username:docker}
	I0729 18:47:15.782838   84251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 18:47:15.836149   84251 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:47:16.054209   84251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 18:47:16.057974   84251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:47:16.466223   84251 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0729 18:47:16.466426   84251 main.go:141] libmachine: Making call to close driver server
	I0729 18:47:16.466466   84251 main.go:141] libmachine: (newest-cni-903256) Calling .Close
	I0729 18:47:16.466805   84251 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:47:16.466822   84251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:47:16.466822   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Closing plugin on server side
	I0729 18:47:16.466832   84251 main.go:141] libmachine: Making call to close driver server
	I0729 18:47:16.466841   84251 main.go:141] libmachine: (newest-cni-903256) Calling .Close
	I0729 18:47:16.467129   84251 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:47:16.467174   84251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:47:16.467171   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Closing plugin on server side
	I0729 18:47:16.468222   84251 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:47:16.468281   84251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:47:16.488188   84251 main.go:141] libmachine: Making call to close driver server
	I0729 18:47:16.488206   84251 main.go:141] libmachine: (newest-cni-903256) Calling .Close
	I0729 18:47:16.488571   84251 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:47:16.488589   84251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:47:16.488596   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Closing plugin on server side
	I0729 18:47:16.972086   84251 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-903256" context rescaled to 1 replicas
	I0729 18:47:17.158121   84251 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.100080696s)
	I0729 18:47:17.158168   84251 api_server.go:72] duration metric: took 1.583043597s to wait for apiserver process to appear ...
	I0729 18:47:17.158190   84251 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:47:17.158212   84251 api_server.go:253] Checking apiserver healthz at https://192.168.50.148:8443/healthz ...
	I0729 18:47:17.158189   84251 main.go:141] libmachine: Making call to close driver server
	I0729 18:47:17.158296   84251 main.go:141] libmachine: (newest-cni-903256) Calling .Close
	I0729 18:47:17.158711   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Closing plugin on server side
	I0729 18:47:17.158753   84251 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:47:17.158762   84251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:47:17.158776   84251 main.go:141] libmachine: Making call to close driver server
	I0729 18:47:17.158784   84251 main.go:141] libmachine: (newest-cni-903256) Calling .Close
	I0729 18:47:17.159194   84251 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:47:17.159212   84251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:47:17.159220   84251 main.go:141] libmachine: (newest-cni-903256) DBG | Closing plugin on server side
	I0729 18:47:17.160884   84251 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0729 18:47:17.162070   84251 addons.go:510] duration metric: took 1.586906762s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0729 18:47:17.175977   84251 api_server.go:279] https://192.168.50.148:8443/healthz returned 200:
	ok
	I0729 18:47:17.179364   84251 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 18:47:17.179393   84251 api_server.go:131] duration metric: took 21.195203ms to wait for apiserver health ...
	I0729 18:47:17.179401   84251 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:47:17.202020   84251 system_pods.go:59] 8 kube-system pods found
	I0729 18:47:17.202103   84251 system_pods.go:61] "coredns-5cfdc65f69-p6wlm" [3052e7d5-bdfd-4118-b8b6-72945d493f25] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 18:47:17.202115   84251 system_pods.go:61] "coredns-5cfdc65f69-qtk95" [62593f19-4cb2-4f6e-8ddf-d375569d07e3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 18:47:17.202132   84251 system_pods.go:61] "etcd-newest-cni-903256" [4cd7ea62-6212-47df-9abb-5e48bcb73b28] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 18:47:17.202139   84251 system_pods.go:61] "kube-apiserver-newest-cni-903256" [ec79f34c-4b0a-46ee-9d78-7545e20b33d5] Running
	I0729 18:47:17.202152   84251 system_pods.go:61] "kube-controller-manager-newest-cni-903256" [548b9248-c25d-4ac7-bd55-0f20614a9640] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 18:47:17.202166   84251 system_pods.go:61] "kube-proxy-x7f5t" [658c7d91-ce7a-40d4-93d1-731444281915] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 18:47:17.202174   84251 system_pods.go:61] "kube-scheduler-newest-cni-903256" [4e642140-ca06-42e5-963c-87d8a5be6f53] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 18:47:17.202182   84251 system_pods.go:61] "storage-provisioner" [b5577ca0-0b55-4c46-ab7c-3f5da0a62c72] Pending
	I0729 18:47:17.202190   84251 system_pods.go:74] duration metric: took 22.783633ms to wait for pod list to return data ...
	I0729 18:47:17.202199   84251 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:47:17.214888   84251 default_sa.go:45] found service account: "default"
	I0729 18:47:17.214917   84251 default_sa.go:55] duration metric: took 12.71274ms for default service account to be created ...
	I0729 18:47:17.214929   84251 kubeadm.go:582] duration metric: took 1.639808682s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 18:47:17.214944   84251 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:47:17.222488   84251 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:47:17.222511   84251 node_conditions.go:123] node cpu capacity is 2
	I0729 18:47:17.222523   84251 node_conditions.go:105] duration metric: took 7.574074ms to run NodePressure ...
	I0729 18:47:17.222533   84251 start.go:241] waiting for startup goroutines ...
	I0729 18:47:17.222539   84251 start.go:246] waiting for cluster config update ...
	I0729 18:47:17.222549   84251 start.go:255] writing updated cluster config ...
	I0729 18:47:17.222860   84251 ssh_runner.go:195] Run: rm -f paused
	I0729 18:47:17.276577   84251 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 18:47:17.278711   84251 out.go:177] * Done! kubectl is now configured to use "newest-cni-903256" cluster and "default" namespace by default
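
	The log above ends with minikube polling https://192.168.50.148:8443/healthz until the API server answers 200 "ok" before declaring the cluster ready (the api_server.go "waiting for apiserver healthz status" lines). The following is only a minimal illustrative sketch of that kind of readiness probe, not minikube's actual implementation; the endpoint, timeout, and poll interval are hypothetical, and TLS verification is skipped purely because this is a self-contained example against a bootstrap self-signed certificate.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls an apiserver /healthz endpoint until it returns
	// HTTP 200 or the deadline expires, mirroring the "waiting for apiserver
	// healthz status" step recorded in the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Skip certificate verification only for this sketch: during
			// bootstrap the apiserver serves a cluster-local CA certificate.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reported healthy
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		// Hypothetical endpoint matching the address seen in the log.
		if err := waitForHealthz("https://192.168.50.148:8443/healthz", 30*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ok")
	}
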
	
	
	==> CRI-O <==
	Jul 29 18:47:21 no-preload-888056 crio[736]: time="2024-07-29 18:47:21.471868217Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278841471847929,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c623b9f-d1eb-4b6b-96ba-5fb15868b836 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:47:21 no-preload-888056 crio[736]: time="2024-07-29 18:47:21.472433528Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e98c04de-9b1f-40e5-8dea-966cca8d7901 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:47:21 no-preload-888056 crio[736]: time="2024-07-29 18:47:21.472500920Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e98c04de-9b1f-40e5-8dea-966cca8d7901 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:47:21 no-preload-888056 crio[736]: time="2024-07-29 18:47:21.472705721Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:779e9739bfde18464512468df0e87f48c1c401d4ce273a6095af79033ffe2656,PodSandboxId:c92eac849c05a276450f5ed21c16280f037924bd6f261fc4fac83527ad034d67,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277975185304948,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aacb67c-abea-47fb-a2f1-f1245e68599a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85701264cf72fe0d32d7f7107aafb2d5901645a6cafbbdef791511be37ccae55,PodSandboxId:afe9dc082f2fd7f1cbf73a448ac50816520049fb66834326f61b115559907037,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277974206890914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-j9ddw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679f8750-86aa-4e00-8291-6996b54b1930,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6857c552e13c1c01ae4c5d44e049fa2118b38a61c4aec37092311630f54fc67,PodSandboxId:005b4fcdc8b0cb00146953eadff1af1ab3bd974ac03e464760ebaaae6e094e7e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277974043899160,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bbh6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66
b43af3-78eb-437f-81d7-eedb4cc34349,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b1774d6fcb55cec17aa29d4f0706d63871f6c0b47f54375c40db87b04b70742,PodSandboxId:4c608bb1fab59208b201b2829ef27301d6a92b4c385822079ead11d2d1f59c93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722277973250368443,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-94ff9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd06899e-3d54-4b71-bda6-f8c6d06ce100,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d436b0a14a79af77c8f0c8cfe3de4fd0a11bdd340381691ffe45ce54fbe56f1,PodSandboxId:347441c86a6508adfde0f708f5cd0b9894be414137b73f1018e84e28c1bb8e38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722277962604229613,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06c4a8e42fd4c8af4ba53f7fe0baa3b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a585ae36a26fed874f5cc3160be36d9a1efe57aaffbcb6d9be93da7f450b4c1,PodSandboxId:b01d9cf9e0e5b22f09c784ffd72f3bb05813a57653ac5ed726763865106b58a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722277962613018790,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fb17f47342c625216d5a613149e748,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e06f4bdecbf2629adb6db26a31717832f3ec841760329334040be495323ba8,PodSandboxId:70713d0f76b9c9553978014fcb78c07125601ddf5932e9f9956b56b3c1a7b13f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722277962580185978,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbe9d632c1637a08ae56bd9899dd403,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5f8d9c79b25a7fb9ee00be8049c0d1e607e78f6bc95d4340b6a3ffbfcf1dd3,PodSandboxId:1849372e07553a89feb0ce99c56ca232346c1d20d288c1d165910237adb69abc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722277962549420977,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f538f33d1fcf149f95291a1ac2f3fb29,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8521c4072867629f4deee425dc4bca36a97b2903ae0134a72ff6192cfc236dee,PodSandboxId:bcc0dc1963755939bacfbc748220969a0405ee97cf6e49e81b4247851fe33ea4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722277678156067131,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbe9d632c1637a08ae56bd9899dd403,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e98c04de-9b1f-40e5-8dea-966cca8d7901 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:47:21 no-preload-888056 crio[736]: time="2024-07-29 18:47:21.511478490Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f68b3495-21a1-443f-930c-e149cbef163e name=/runtime.v1.RuntimeService/Version
	Jul 29 18:47:21 no-preload-888056 crio[736]: time="2024-07-29 18:47:21.511569819Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f68b3495-21a1-443f-930c-e149cbef163e name=/runtime.v1.RuntimeService/Version
	Jul 29 18:47:21 no-preload-888056 crio[736]: time="2024-07-29 18:47:21.516289670Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7374b66f-975b-467b-92ae-ca149ddd07e4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:47:21 no-preload-888056 crio[736]: time="2024-07-29 18:47:21.517001115Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278841516915099,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7374b66f-975b-467b-92ae-ca149ddd07e4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:47:21 no-preload-888056 crio[736]: time="2024-07-29 18:47:21.518038480Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=34626579-274e-4ddf-868c-a384ef50133e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:47:21 no-preload-888056 crio[736]: time="2024-07-29 18:47:21.518109446Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=34626579-274e-4ddf-868c-a384ef50133e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:47:21 no-preload-888056 crio[736]: time="2024-07-29 18:47:21.518337031Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:779e9739bfde18464512468df0e87f48c1c401d4ce273a6095af79033ffe2656,PodSandboxId:c92eac849c05a276450f5ed21c16280f037924bd6f261fc4fac83527ad034d67,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277975185304948,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aacb67c-abea-47fb-a2f1-f1245e68599a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85701264cf72fe0d32d7f7107aafb2d5901645a6cafbbdef791511be37ccae55,PodSandboxId:afe9dc082f2fd7f1cbf73a448ac50816520049fb66834326f61b115559907037,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277974206890914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-j9ddw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679f8750-86aa-4e00-8291-6996b54b1930,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6857c552e13c1c01ae4c5d44e049fa2118b38a61c4aec37092311630f54fc67,PodSandboxId:005b4fcdc8b0cb00146953eadff1af1ab3bd974ac03e464760ebaaae6e094e7e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277974043899160,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bbh6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66
b43af3-78eb-437f-81d7-eedb4cc34349,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b1774d6fcb55cec17aa29d4f0706d63871f6c0b47f54375c40db87b04b70742,PodSandboxId:4c608bb1fab59208b201b2829ef27301d6a92b4c385822079ead11d2d1f59c93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722277973250368443,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-94ff9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd06899e-3d54-4b71-bda6-f8c6d06ce100,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d436b0a14a79af77c8f0c8cfe3de4fd0a11bdd340381691ffe45ce54fbe56f1,PodSandboxId:347441c86a6508adfde0f708f5cd0b9894be414137b73f1018e84e28c1bb8e38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722277962604229613,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06c4a8e42fd4c8af4ba53f7fe0baa3b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a585ae36a26fed874f5cc3160be36d9a1efe57aaffbcb6d9be93da7f450b4c1,PodSandboxId:b01d9cf9e0e5b22f09c784ffd72f3bb05813a57653ac5ed726763865106b58a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722277962613018790,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fb17f47342c625216d5a613149e748,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e06f4bdecbf2629adb6db26a31717832f3ec841760329334040be495323ba8,PodSandboxId:70713d0f76b9c9553978014fcb78c07125601ddf5932e9f9956b56b3c1a7b13f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722277962580185978,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbe9d632c1637a08ae56bd9899dd403,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5f8d9c79b25a7fb9ee00be8049c0d1e607e78f6bc95d4340b6a3ffbfcf1dd3,PodSandboxId:1849372e07553a89feb0ce99c56ca232346c1d20d288c1d165910237adb69abc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722277962549420977,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f538f33d1fcf149f95291a1ac2f3fb29,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8521c4072867629f4deee425dc4bca36a97b2903ae0134a72ff6192cfc236dee,PodSandboxId:bcc0dc1963755939bacfbc748220969a0405ee97cf6e49e81b4247851fe33ea4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722277678156067131,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbe9d632c1637a08ae56bd9899dd403,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=34626579-274e-4ddf-868c-a384ef50133e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:47:21 no-preload-888056 crio[736]: time="2024-07-29 18:47:21.563435641Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aa688e81-a012-4e61-a96d-3caab12f654c name=/runtime.v1.RuntimeService/Version
	Jul 29 18:47:21 no-preload-888056 crio[736]: time="2024-07-29 18:47:21.563562511Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aa688e81-a012-4e61-a96d-3caab12f654c name=/runtime.v1.RuntimeService/Version
	Jul 29 18:47:21 no-preload-888056 crio[736]: time="2024-07-29 18:47:21.564587206Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4165bd3a-7b96-4bd7-99b1-7e015df723ca name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:47:21 no-preload-888056 crio[736]: time="2024-07-29 18:47:21.564927653Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278841564906752,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4165bd3a-7b96-4bd7-99b1-7e015df723ca name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:47:21 no-preload-888056 crio[736]: time="2024-07-29 18:47:21.565489665Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b5840f3-1219-4c18-813f-3a2f3bf700a0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:47:21 no-preload-888056 crio[736]: time="2024-07-29 18:47:21.565553900Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b5840f3-1219-4c18-813f-3a2f3bf700a0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:47:21 no-preload-888056 crio[736]: time="2024-07-29 18:47:21.566480042Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:779e9739bfde18464512468df0e87f48c1c401d4ce273a6095af79033ffe2656,PodSandboxId:c92eac849c05a276450f5ed21c16280f037924bd6f261fc4fac83527ad034d67,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277975185304948,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aacb67c-abea-47fb-a2f1-f1245e68599a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85701264cf72fe0d32d7f7107aafb2d5901645a6cafbbdef791511be37ccae55,PodSandboxId:afe9dc082f2fd7f1cbf73a448ac50816520049fb66834326f61b115559907037,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277974206890914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-j9ddw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679f8750-86aa-4e00-8291-6996b54b1930,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6857c552e13c1c01ae4c5d44e049fa2118b38a61c4aec37092311630f54fc67,PodSandboxId:005b4fcdc8b0cb00146953eadff1af1ab3bd974ac03e464760ebaaae6e094e7e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277974043899160,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bbh6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66
b43af3-78eb-437f-81d7-eedb4cc34349,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b1774d6fcb55cec17aa29d4f0706d63871f6c0b47f54375c40db87b04b70742,PodSandboxId:4c608bb1fab59208b201b2829ef27301d6a92b4c385822079ead11d2d1f59c93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722277973250368443,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-94ff9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd06899e-3d54-4b71-bda6-f8c6d06ce100,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d436b0a14a79af77c8f0c8cfe3de4fd0a11bdd340381691ffe45ce54fbe56f1,PodSandboxId:347441c86a6508adfde0f708f5cd0b9894be414137b73f1018e84e28c1bb8e38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722277962604229613,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06c4a8e42fd4c8af4ba53f7fe0baa3b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a585ae36a26fed874f5cc3160be36d9a1efe57aaffbcb6d9be93da7f450b4c1,PodSandboxId:b01d9cf9e0e5b22f09c784ffd72f3bb05813a57653ac5ed726763865106b58a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722277962613018790,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fb17f47342c625216d5a613149e748,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e06f4bdecbf2629adb6db26a31717832f3ec841760329334040be495323ba8,PodSandboxId:70713d0f76b9c9553978014fcb78c07125601ddf5932e9f9956b56b3c1a7b13f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722277962580185978,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbe9d632c1637a08ae56bd9899dd403,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5f8d9c79b25a7fb9ee00be8049c0d1e607e78f6bc95d4340b6a3ffbfcf1dd3,PodSandboxId:1849372e07553a89feb0ce99c56ca232346c1d20d288c1d165910237adb69abc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722277962549420977,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f538f33d1fcf149f95291a1ac2f3fb29,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8521c4072867629f4deee425dc4bca36a97b2903ae0134a72ff6192cfc236dee,PodSandboxId:bcc0dc1963755939bacfbc748220969a0405ee97cf6e49e81b4247851fe33ea4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722277678156067131,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbe9d632c1637a08ae56bd9899dd403,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b5840f3-1219-4c18-813f-3a2f3bf700a0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:47:21 no-preload-888056 crio[736]: time="2024-07-29 18:47:21.607329497Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d618f4f7-81e1-4f47-b354-a59bf76090e4 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:47:21 no-preload-888056 crio[736]: time="2024-07-29 18:47:21.607427751Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d618f4f7-81e1-4f47-b354-a59bf76090e4 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:47:21 no-preload-888056 crio[736]: time="2024-07-29 18:47:21.610377138Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4ec3256e-d782-49cc-b6e4-13ca44d1fc03 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:47:21 no-preload-888056 crio[736]: time="2024-07-29 18:47:21.610862083Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278841610836935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4ec3256e-d782-49cc-b6e4-13ca44d1fc03 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:47:21 no-preload-888056 crio[736]: time="2024-07-29 18:47:21.611376311Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4d23121-81e0-4b1e-ba4d-1499f2a1a913 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:47:21 no-preload-888056 crio[736]: time="2024-07-29 18:47:21.611454402Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4d23121-81e0-4b1e-ba4d-1499f2a1a913 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:47:21 no-preload-888056 crio[736]: time="2024-07-29 18:47:21.611687330Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:779e9739bfde18464512468df0e87f48c1c401d4ce273a6095af79033ffe2656,PodSandboxId:c92eac849c05a276450f5ed21c16280f037924bd6f261fc4fac83527ad034d67,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277975185304948,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aacb67c-abea-47fb-a2f1-f1245e68599a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85701264cf72fe0d32d7f7107aafb2d5901645a6cafbbdef791511be37ccae55,PodSandboxId:afe9dc082f2fd7f1cbf73a448ac50816520049fb66834326f61b115559907037,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277974206890914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-j9ddw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679f8750-86aa-4e00-8291-6996b54b1930,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6857c552e13c1c01ae4c5d44e049fa2118b38a61c4aec37092311630f54fc67,PodSandboxId:005b4fcdc8b0cb00146953eadff1af1ab3bd974ac03e464760ebaaae6e094e7e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277974043899160,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bbh6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66
b43af3-78eb-437f-81d7-eedb4cc34349,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b1774d6fcb55cec17aa29d4f0706d63871f6c0b47f54375c40db87b04b70742,PodSandboxId:4c608bb1fab59208b201b2829ef27301d6a92b4c385822079ead11d2d1f59c93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722277973250368443,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-94ff9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd06899e-3d54-4b71-bda6-f8c6d06ce100,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d436b0a14a79af77c8f0c8cfe3de4fd0a11bdd340381691ffe45ce54fbe56f1,PodSandboxId:347441c86a6508adfde0f708f5cd0b9894be414137b73f1018e84e28c1bb8e38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722277962604229613,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06c4a8e42fd4c8af4ba53f7fe0baa3b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a585ae36a26fed874f5cc3160be36d9a1efe57aaffbcb6d9be93da7f450b4c1,PodSandboxId:b01d9cf9e0e5b22f09c784ffd72f3bb05813a57653ac5ed726763865106b58a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722277962613018790,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fb17f47342c625216d5a613149e748,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e06f4bdecbf2629adb6db26a31717832f3ec841760329334040be495323ba8,PodSandboxId:70713d0f76b9c9553978014fcb78c07125601ddf5932e9f9956b56b3c1a7b13f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722277962580185978,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbe9d632c1637a08ae56bd9899dd403,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5f8d9c79b25a7fb9ee00be8049c0d1e607e78f6bc95d4340b6a3ffbfcf1dd3,PodSandboxId:1849372e07553a89feb0ce99c56ca232346c1d20d288c1d165910237adb69abc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722277962549420977,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f538f33d1fcf149f95291a1ac2f3fb29,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8521c4072867629f4deee425dc4bca36a97b2903ae0134a72ff6192cfc236dee,PodSandboxId:bcc0dc1963755939bacfbc748220969a0405ee97cf6e49e81b4247851fe33ea4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722277678156067131,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-888056,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbe9d632c1637a08ae56bd9899dd403,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e4d23121-81e0-4b1e-ba4d-1499f2a1a913 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	779e9739bfde1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   c92eac849c05a       storage-provisioner
	85701264cf72f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   afe9dc082f2fd       coredns-5cfdc65f69-j9ddw
	f6857c552e13c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   005b4fcdc8b0c       coredns-5cfdc65f69-bbh6c
	2b1774d6fcb55       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   14 minutes ago      Running             kube-proxy                0                   4c608bb1fab59       kube-proxy-94ff9
	2a585ae36a26f       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   14 minutes ago      Running             kube-controller-manager   2                   b01d9cf9e0e5b       kube-controller-manager-no-preload-888056
	7d436b0a14a79       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   14 minutes ago      Running             kube-scheduler            2                   347441c86a650       kube-scheduler-no-preload-888056
	f2e06f4bdecbf       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   14 minutes ago      Running             kube-apiserver            2                   70713d0f76b9c       kube-apiserver-no-preload-888056
	5c5f8d9c79b25       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   14 minutes ago      Running             etcd                      2                   1849372e07553       etcd-no-preload-888056
	8521c40728676       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   19 minutes ago      Exited              kube-apiserver            1                   bcc0dc1963755       kube-apiserver-no-preload-888056
	
	
	==> coredns [85701264cf72fe0d32d7f7107aafb2d5901645a6cafbbdef791511be37ccae55] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f6857c552e13c1c01ae4c5d44e049fa2118b38a61c4aec37092311630f54fc67] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-888056
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-888056
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=no-preload-888056
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T18_32_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:32:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-888056
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:47:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:43:11 +0000   Mon, 29 Jul 2024 18:32:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:43:11 +0000   Mon, 29 Jul 2024 18:32:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:43:11 +0000   Mon, 29 Jul 2024 18:32:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:43:11 +0000   Mon, 29 Jul 2024 18:32:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.80
	  Hostname:    no-preload-888056
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 758826d64bb7478da0705a25a2608906
	  System UUID:                758826d6-4bb7-478d-a070-5a25a2608906
	  Boot ID:                    875ba7f7-9aaa-4f23-90f2-2198eefaec6c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-bbh6c                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-5cfdc65f69-j9ddw                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-888056                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-888056             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-888056    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-94ff9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-888056             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-78fcd8795b-9qqmj              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node no-preload-888056 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node no-preload-888056 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node no-preload-888056 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node no-preload-888056 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node no-preload-888056 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node no-preload-888056 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node no-preload-888056 event: Registered Node no-preload-888056 in Controller
	
	
	==> dmesg <==
	[  +0.057652] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.049776] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.258244] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.662198] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.603745] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.924650] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.061107] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061594] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.191247] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[  +0.149984] systemd-fstab-generator[692]: Ignoring "noauto" option for root device
	[  +0.287943] systemd-fstab-generator[721]: Ignoring "noauto" option for root device
	[ +14.909600] systemd-fstab-generator[1187]: Ignoring "noauto" option for root device
	[  +0.064478] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.072496] systemd-fstab-generator[1310]: Ignoring "noauto" option for root device
	[Jul29 18:28] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.815521] kauditd_printk_skb: 93 callbacks suppressed
	[Jul29 18:32] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.556957] systemd-fstab-generator[2975]: Ignoring "noauto" option for root device
	[  +4.624635] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.467554] systemd-fstab-generator[3295]: Ignoring "noauto" option for root device
	[  +5.735400] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.209870] systemd-fstab-generator[3493]: Ignoring "noauto" option for root device
	[  +7.070286] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [5c5f8d9c79b25a7fb9ee00be8049c0d1e607e78f6bc95d4340b6a3ffbfcf1dd3] <==
	{"level":"info","ts":"2024-07-29T18:32:42.963002Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.72.80:2380"}
	{"level":"info","ts":"2024-07-29T18:32:42.963103Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T18:32:43.523547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83bc08ad82c569f4 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T18:32:43.523658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83bc08ad82c569f4 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T18:32:43.524151Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83bc08ad82c569f4 received MsgPreVoteResp from 83bc08ad82c569f4 at term 1"}
	{"level":"info","ts":"2024-07-29T18:32:43.52421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83bc08ad82c569f4 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T18:32:43.524237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83bc08ad82c569f4 received MsgVoteResp from 83bc08ad82c569f4 at term 2"}
	{"level":"info","ts":"2024-07-29T18:32:43.524263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83bc08ad82c569f4 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T18:32:43.524289Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 83bc08ad82c569f4 elected leader 83bc08ad82c569f4 at term 2"}
	{"level":"info","ts":"2024-07-29T18:32:43.529038Z","caller":"etcdserver/server.go:2628","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:32:43.533132Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"83bc08ad82c569f4","local-member-attributes":"{Name:no-preload-888056 ClientURLs:[https://192.168.72.80:2379]}","request-path":"/0/members/83bc08ad82c569f4/attributes","cluster-id":"96c4a3ba39e20af4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T18:32:43.533257Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T18:32:43.533742Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T18:32:43.533873Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"96c4a3ba39e20af4","local-member-id":"83bc08ad82c569f4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:32:43.534012Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:32:43.534034Z","caller":"etcdserver/server.go:2652","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:32:43.536406Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T18:32:43.537197Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.80:2379"}
	{"level":"info","ts":"2024-07-29T18:32:43.539671Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T18:32:43.542748Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T18:32:43.551987Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T18:32:43.552052Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T18:42:43.623231Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":723}
	{"level":"info","ts":"2024-07-29T18:42:43.633181Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":723,"took":"9.021031ms","hash":56159094,"current-db-size-bytes":2392064,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2392064,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-07-29T18:42:43.633279Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":56159094,"revision":723,"compact-revision":-1}
	
	
	==> kernel <==
	 18:47:21 up 19 min,  0 users,  load average: 0.10, 0.15, 0.17
	Linux no-preload-888056 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8521c4072867629f4deee425dc4bca36a97b2903ae0134a72ff6192cfc236dee] <==
	W0729 18:32:38.258109       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.279825       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.295357       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.307470       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.330044       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.333238       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.354056       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.429463       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.429748       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.456393       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.544426       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.643830       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.673272       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.700717       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.720511       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.737346       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.743775       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.861367       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.878228       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:38.909339       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:39.024669       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:39.054100       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:39.154373       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:39.161848       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:32:39.246476       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f2e06f4bdecbf2629adb6db26a31717832f3ec841760329334040be495323ba8] <==
	E0729 18:42:46.110799       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0729 18:42:46.110692       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0729 18:42:46.112005       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 18:42:46.112041       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 18:43:46.112604       1 handler_proxy.go:99] no RequestInfo found in the context
	W0729 18:43:46.112665       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 18:43:46.112869       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0729 18:43:46.113046       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0729 18:43:46.114232       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 18:43:46.114280       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 18:45:46.115448       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 18:45:46.115556       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0729 18:45:46.115637       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 18:45:46.115714       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0729 18:45:46.116721       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 18:45:46.116787       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [2a585ae36a26fed874f5cc3160be36d9a1efe57aaffbcb6d9be93da7f450b4c1] <==
	E0729 18:41:53.090410       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 18:41:53.266348       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:42:23.097824       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 18:42:23.275454       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:42:53.106425       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 18:42:53.284656       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 18:43:11.839089       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-888056"
	E0729 18:43:23.114099       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 18:43:23.295161       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:43:53.120862       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 18:43:53.303424       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 18:44:07.791542       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="271.139µs"
	I0729 18:44:18.793080       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="60.124µs"
	E0729 18:44:23.127767       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 18:44:23.311243       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:44:53.135766       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 18:44:53.320039       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:45:23.142913       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 18:45:23.328726       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:45:53.150483       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 18:45:53.339697       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:46:23.157072       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 18:46:23.347730       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 18:46:53.165011       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 18:46:53.357078       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [2b1774d6fcb55cec17aa29d4f0706d63871f6c0b47f54375c40db87b04b70742] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0729 18:32:53.698126       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0729 18:32:53.715699       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.72.80"]
	E0729 18:32:53.715777       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0729 18:32:53.834449       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0729 18:32:53.834623       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 18:32:53.834657       1 server_linux.go:170] "Using iptables Proxier"
	I0729 18:32:53.848671       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0729 18:32:53.849062       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0729 18:32:53.849092       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:32:53.850788       1 config.go:197] "Starting service config controller"
	I0729 18:32:53.850818       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 18:32:53.850841       1 config.go:104] "Starting endpoint slice config controller"
	I0729 18:32:53.850845       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 18:32:53.851486       1 config.go:326] "Starting node config controller"
	I0729 18:32:53.851493       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 18:32:53.952827       1 shared_informer.go:320] Caches are synced for node config
	I0729 18:32:53.952843       1 shared_informer.go:320] Caches are synced for service config
	I0729 18:32:53.952854       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7d436b0a14a79af77c8f0c8cfe3de4fd0a11bdd340381691ffe45ce54fbe56f1] <==
	W0729 18:32:45.199647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 18:32:45.199666       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 18:32:46.057714       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 18:32:46.057766       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 18:32:46.108859       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 18:32:46.108911       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0729 18:32:46.111008       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 18:32:46.111058       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0729 18:32:46.132568       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 18:32:46.132619       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0729 18:32:46.180459       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 18:32:46.181313       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 18:32:46.212191       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 18:32:46.212306       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0729 18:32:46.269831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 18:32:46.270065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0729 18:32:46.318062       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 18:32:46.318129       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 18:32:46.360185       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 18:32:46.360314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 18:32:46.361083       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 18:32:46.361124       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0729 18:32:46.432112       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 18:32:46.432166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0729 18:32:47.876889       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 18:44:47 no-preload-888056 kubelet[3302]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:44:47 no-preload-888056 kubelet[3302]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:44:47 no-preload-888056 kubelet[3302]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:44:57 no-preload-888056 kubelet[3302]: E0729 18:44:57.772702    3302 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9qqmj" podUID="45bbbaf3-cf3e-4db1-9eec-693425bc5dff"
	Jul 29 18:45:08 no-preload-888056 kubelet[3302]: E0729 18:45:08.770971    3302 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9qqmj" podUID="45bbbaf3-cf3e-4db1-9eec-693425bc5dff"
	Jul 29 18:45:22 no-preload-888056 kubelet[3302]: E0729 18:45:22.769912    3302 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9qqmj" podUID="45bbbaf3-cf3e-4db1-9eec-693425bc5dff"
	Jul 29 18:45:37 no-preload-888056 kubelet[3302]: E0729 18:45:37.771585    3302 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9qqmj" podUID="45bbbaf3-cf3e-4db1-9eec-693425bc5dff"
	Jul 29 18:45:47 no-preload-888056 kubelet[3302]: E0729 18:45:47.831192    3302 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:45:47 no-preload-888056 kubelet[3302]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:45:47 no-preload-888056 kubelet[3302]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:45:47 no-preload-888056 kubelet[3302]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:45:47 no-preload-888056 kubelet[3302]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:45:49 no-preload-888056 kubelet[3302]: E0729 18:45:49.772054    3302 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9qqmj" podUID="45bbbaf3-cf3e-4db1-9eec-693425bc5dff"
	Jul 29 18:46:02 no-preload-888056 kubelet[3302]: E0729 18:46:02.770394    3302 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9qqmj" podUID="45bbbaf3-cf3e-4db1-9eec-693425bc5dff"
	Jul 29 18:46:15 no-preload-888056 kubelet[3302]: E0729 18:46:15.770587    3302 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9qqmj" podUID="45bbbaf3-cf3e-4db1-9eec-693425bc5dff"
	Jul 29 18:46:28 no-preload-888056 kubelet[3302]: E0729 18:46:28.770735    3302 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9qqmj" podUID="45bbbaf3-cf3e-4db1-9eec-693425bc5dff"
	Jul 29 18:46:42 no-preload-888056 kubelet[3302]: E0729 18:46:42.770706    3302 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9qqmj" podUID="45bbbaf3-cf3e-4db1-9eec-693425bc5dff"
	Jul 29 18:46:47 no-preload-888056 kubelet[3302]: E0729 18:46:47.837671    3302 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:46:47 no-preload-888056 kubelet[3302]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:46:47 no-preload-888056 kubelet[3302]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:46:47 no-preload-888056 kubelet[3302]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:46:47 no-preload-888056 kubelet[3302]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:46:55 no-preload-888056 kubelet[3302]: E0729 18:46:55.770625    3302 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9qqmj" podUID="45bbbaf3-cf3e-4db1-9eec-693425bc5dff"
	Jul 29 18:47:07 no-preload-888056 kubelet[3302]: E0729 18:47:07.782821    3302 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9qqmj" podUID="45bbbaf3-cf3e-4db1-9eec-693425bc5dff"
	Jul 29 18:47:20 no-preload-888056 kubelet[3302]: E0729 18:47:20.770900    3302 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9qqmj" podUID="45bbbaf3-cf3e-4db1-9eec-693425bc5dff"
	
	
	==> storage-provisioner [779e9739bfde18464512468df0e87f48c1c401d4ce273a6095af79033ffe2656] <==
	I0729 18:32:55.292633       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 18:32:55.306882       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 18:32:55.307123       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 18:32:55.328313       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 18:32:55.328513       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-888056_c718b4ee-9753-4248-ac27-5b5bd211a006!
	I0729 18:32:55.330012       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"10aa09d0-90dc-4467-a48d-93fa86f2b19b", APIVersion:"v1", ResourceVersion:"437", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-888056_c718b4ee-9753-4248-ac27-5b5bd211a006 became leader
	I0729 18:32:55.429556       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-888056_c718b4ee-9753-4248-ac27-5b5bd211a006!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-888056 -n no-preload-888056
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-888056 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-9qqmj
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-888056 describe pod metrics-server-78fcd8795b-9qqmj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-888056 describe pod metrics-server-78fcd8795b-9qqmj: exit status 1 (59.784566ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-9qqmj" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-888056 describe pod metrics-server-78fcd8795b-9qqmj: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (315.12s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (116.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
... (the WARNING line above repeated 17 more times)
E0729 18:44:50.312259   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/calico-729010/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.70:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.70:8443: connect: connection refused
E0729 18:45:57.219590   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/custom-flannel-729010/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-386663 -n old-k8s-version-386663
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-386663 -n old-k8s-version-386663: exit status 2 (228.26345ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-386663" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-386663 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-386663 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.527µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-386663 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386663 -n old-k8s-version-386663
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386663 -n old-k8s-version-386663: exit status 2 (227.961308ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-386663 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-386663 logs -n 25: (1.524339951s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-729010 sudo cat                              | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo                                  | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo                                  | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo                                  | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo find                             | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-729010 sudo crio                             | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-729010                                       | bridge-729010                | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	| delete  | -p                                                     | disable-driver-mounts-603863 | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:18 UTC |
	|         | disable-driver-mounts-603863                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:18 UTC | 29 Jul 24 18:19 UTC |
	|         | default-k8s-diff-port-502055                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-888056             | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC | 29 Jul 24 18:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-888056                                   | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-409322            | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC | 29 Jul 24 18:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-409322                                  | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-502055  | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC |                     |
	|         | default-k8s-diff-port-502055                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-386663        | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-888056                  | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-888056 --memory=2200                     | no-preload-888056            | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:33 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-409322                 | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-409322                                  | embed-certs-409322           | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-502055       | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-502055 | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:31 UTC |
	|         | default-k8s-diff-port-502055                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-386663                              | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:22 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-386663             | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-386663                              | old-k8s-version-386663       | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 18:22:47
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 18:22:47.218965   78080 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:22:47.219209   78080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:22:47.219217   78080 out.go:304] Setting ErrFile to fd 2...
	I0729 18:22:47.219222   78080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:22:47.219370   78080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 18:22:47.219863   78080 out.go:298] Setting JSON to false
	I0729 18:22:47.220726   78080 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7519,"bootTime":1722269848,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:22:47.220777   78080 start.go:139] virtualization: kvm guest
	I0729 18:22:47.222804   78080 out.go:177] * [old-k8s-version-386663] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:22:47.224119   78080 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 18:22:47.224173   78080 notify.go:220] Checking for updates...
	I0729 18:22:47.226449   78080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:22:47.227676   78080 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:22:47.228809   78080 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 18:22:47.229914   78080 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:22:47.230906   78080 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:22:47.232363   78080 config.go:182] Loaded profile config "old-k8s-version-386663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 18:22:47.232750   78080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:22:47.232814   78080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:22:47.247542   78080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44723
	I0729 18:22:47.247909   78080 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:22:47.248418   78080 main.go:141] libmachine: Using API Version  1
	I0729 18:22:47.248436   78080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:22:47.248786   78080 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:22:47.248965   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:22:47.250635   78080 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 18:22:47.251760   78080 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:22:47.252055   78080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:22:47.252098   78080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:22:47.266291   78080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35843
	I0729 18:22:47.266672   78080 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:22:47.267136   78080 main.go:141] libmachine: Using API Version  1
	I0729 18:22:47.267157   78080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:22:47.267492   78080 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:22:47.267662   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:22:47.303335   78080 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 18:22:47.304503   78080 start.go:297] selected driver: kvm2
	I0729 18:22:47.304513   78080 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:22:47.304607   78080 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:22:47.305291   78080 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:22:47.305360   78080 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19345-11206/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:22:47.319918   78080 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:22:47.320315   78080 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:22:47.320341   78080 cni.go:84] Creating CNI manager for ""
	I0729 18:22:47.320349   78080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:22:47.320386   78080 start.go:340] cluster config:
	{Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:22:47.320480   78080 iso.go:125] acquiring lock: {Name:mke302f851ce8256f9b44dd080ed38df68285cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:22:47.322357   78080 out.go:177] * Starting "old-k8s-version-386663" primary control-plane node in "old-k8s-version-386663" cluster
	I0729 18:22:43.378634   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:22:46.450644   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:22:47.323622   78080 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:22:47.323653   78080 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 18:22:47.323660   78080 cache.go:56] Caching tarball of preloaded images
	I0729 18:22:47.323740   78080 preload.go:172] Found /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:22:47.323761   78080 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 18:22:47.323849   78080 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/config.json ...
	I0729 18:22:47.324021   78080 start.go:360] acquireMachinesLock for old-k8s-version-386663: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:22:52.530551   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:22:55.602731   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:01.682636   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:04.754621   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:10.834616   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:13.906688   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:19.986655   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:23.059064   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:29.138659   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:32.210758   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:38.290665   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:41.362732   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:47.442637   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:50.514656   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:56.594611   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:23:59.666706   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:05.746649   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:08.818685   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:14.898642   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:17.970619   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:24.050664   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:27.122664   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:33.202629   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:36.274678   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:42.354674   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:45.426704   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:51.506670   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:24:54.578602   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:00.658683   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:03.730663   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:09.810619   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:12.882598   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:18.962612   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:22.034673   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:28.114638   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:31.186598   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:37.266642   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:40.338599   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:46.418679   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:49.490705   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:55.570690   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:25:58.642719   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:04.722643   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:07.794711   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:13.874638   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:16.946806   77394 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.80:22: connect: no route to host
	I0729 18:26:19.951345   77627 start.go:364] duration metric: took 4m10.060086709s to acquireMachinesLock for "embed-certs-409322"
	I0729 18:26:19.951406   77627 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:26:19.951414   77627 fix.go:54] fixHost starting: 
	I0729 18:26:19.951732   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:26:19.951761   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:26:19.967602   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41827
	I0729 18:26:19.968062   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:26:19.968486   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:26:19.968505   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:26:19.968809   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:26:19.969009   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:19.969135   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:26:19.970757   77627 fix.go:112] recreateIfNeeded on embed-certs-409322: state=Stopped err=<nil>
	I0729 18:26:19.970784   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	W0729 18:26:19.970931   77627 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:26:19.972631   77627 out.go:177] * Restarting existing kvm2 VM for "embed-certs-409322" ...
	I0729 18:26:19.948656   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:26:19.948718   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:26:19.949066   77394 buildroot.go:166] provisioning hostname "no-preload-888056"
	I0729 18:26:19.949096   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:26:19.949286   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:26:19.951194   77394 machine.go:97] duration metric: took 4m37.435248922s to provisionDockerMachine
	I0729 18:26:19.951238   77394 fix.go:56] duration metric: took 4m37.45552986s for fixHost
	I0729 18:26:19.951246   77394 start.go:83] releasing machines lock for "no-preload-888056", held for 4m37.455571504s
	W0729 18:26:19.951284   77394 start.go:714] error starting host: provision: host is not running
	W0729 18:26:19.951381   77394 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 18:26:19.951389   77394 start.go:729] Will try again in 5 seconds ...
	I0729 18:26:19.973786   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Start
	I0729 18:26:19.973923   77627 main.go:141] libmachine: (embed-certs-409322) Ensuring networks are active...
	I0729 18:26:19.974594   77627 main.go:141] libmachine: (embed-certs-409322) Ensuring network default is active
	I0729 18:26:19.974930   77627 main.go:141] libmachine: (embed-certs-409322) Ensuring network mk-embed-certs-409322 is active
	I0729 18:26:19.975500   77627 main.go:141] libmachine: (embed-certs-409322) Getting domain xml...
	I0729 18:26:19.976135   77627 main.go:141] libmachine: (embed-certs-409322) Creating domain...
	I0729 18:26:21.186491   77627 main.go:141] libmachine: (embed-certs-409322) Waiting to get IP...
	I0729 18:26:21.187403   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:21.187857   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:21.187924   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:21.187843   78811 retry.go:31] will retry after 218.694883ms: waiting for machine to come up
	I0729 18:26:21.408404   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:21.408843   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:21.408872   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:21.408795   78811 retry.go:31] will retry after 335.138992ms: waiting for machine to come up
	I0729 18:26:21.745329   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:21.745805   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:21.745828   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:21.745759   78811 retry.go:31] will retry after 317.831297ms: waiting for machine to come up
	I0729 18:26:22.065446   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:22.065985   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:22.066024   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:22.065948   78811 retry.go:31] will retry after 557.945634ms: waiting for machine to come up
	I0729 18:26:22.625624   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:22.626020   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:22.626047   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:22.625967   78811 retry.go:31] will retry after 739.991425ms: waiting for machine to come up
	I0729 18:26:23.368166   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:23.368523   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:23.368549   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:23.368477   78811 retry.go:31] will retry after 878.16479ms: waiting for machine to come up
	I0729 18:26:24.248467   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:24.248871   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:24.248895   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:24.248813   78811 retry.go:31] will retry after 1.022542608s: waiting for machine to come up
	I0729 18:26:24.952911   77394 start.go:360] acquireMachinesLock for no-preload-888056: {Name:mke21c1c79cc7915e3f7595726f3952a8aaf5204 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:26:25.273470   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:25.273886   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:25.273913   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:25.273829   78811 retry.go:31] will retry after 1.313344307s: waiting for machine to come up
	I0729 18:26:26.589378   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:26.589805   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:26.589852   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:26.589769   78811 retry.go:31] will retry after 1.553795128s: waiting for machine to come up
	I0729 18:26:28.145271   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:28.145680   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:28.145704   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:28.145643   78811 retry.go:31] will retry after 1.859680601s: waiting for machine to come up
	I0729 18:26:30.007588   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:30.007988   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:30.008018   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:30.007937   78811 retry.go:31] will retry after 1.754805493s: waiting for machine to come up
	I0729 18:26:31.764527   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:31.765077   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:31.765107   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:31.765030   78811 retry.go:31] will retry after 2.769383357s: waiting for machine to come up
	I0729 18:26:34.536479   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:34.536972   77627 main.go:141] libmachine: (embed-certs-409322) DBG | unable to find current IP address of domain embed-certs-409322 in network mk-embed-certs-409322
	I0729 18:26:34.537007   77627 main.go:141] libmachine: (embed-certs-409322) DBG | I0729 18:26:34.536921   78811 retry.go:31] will retry after 3.355218512s: waiting for machine to come up
	I0729 18:26:39.563371   77859 start.go:364] duration metric: took 3m59.712120998s to acquireMachinesLock for "default-k8s-diff-port-502055"
	I0729 18:26:39.563440   77859 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:26:39.563452   77859 fix.go:54] fixHost starting: 
	I0729 18:26:39.563871   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:26:39.563914   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:26:39.580545   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I0729 18:26:39.580962   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:26:39.581492   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:26:39.581518   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:26:39.581864   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:26:39.582096   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:39.582290   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:26:39.583857   77859 fix.go:112] recreateIfNeeded on default-k8s-diff-port-502055: state=Stopped err=<nil>
	I0729 18:26:39.583883   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	W0729 18:26:39.584062   77859 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:26:39.586281   77859 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-502055" ...
	I0729 18:26:39.587651   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Start
	I0729 18:26:39.587814   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Ensuring networks are active...
	I0729 18:26:39.588499   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Ensuring network default is active
	I0729 18:26:39.588864   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Ensuring network mk-default-k8s-diff-port-502055 is active
	I0729 18:26:39.589616   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Getting domain xml...
	I0729 18:26:39.590433   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Creating domain...
	I0729 18:26:37.896070   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:37.896640   77627 main.go:141] libmachine: (embed-certs-409322) Found IP for machine: 192.168.39.58
	I0729 18:26:37.896664   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has current primary IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:37.896670   77627 main.go:141] libmachine: (embed-certs-409322) Reserving static IP address...
	I0729 18:26:37.897129   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "embed-certs-409322", mac: "52:54:00:22:9f:57", ip: "192.168.39.58"} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:37.897157   77627 main.go:141] libmachine: (embed-certs-409322) Reserved static IP address: 192.168.39.58
	I0729 18:26:37.897173   77627 main.go:141] libmachine: (embed-certs-409322) DBG | skip adding static IP to network mk-embed-certs-409322 - found existing host DHCP lease matching {name: "embed-certs-409322", mac: "52:54:00:22:9f:57", ip: "192.168.39.58"}
	I0729 18:26:37.897189   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Getting to WaitForSSH function...
	I0729 18:26:37.897206   77627 main.go:141] libmachine: (embed-certs-409322) Waiting for SSH to be available...
	I0729 18:26:37.899216   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:37.899595   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:37.899616   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:37.899785   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Using SSH client type: external
	I0729 18:26:37.899808   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa (-rw-------)
	I0729 18:26:37.899845   77627 main.go:141] libmachine: (embed-certs-409322) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:26:37.899858   77627 main.go:141] libmachine: (embed-certs-409322) DBG | About to run SSH command:
	I0729 18:26:37.899872   77627 main.go:141] libmachine: (embed-certs-409322) DBG | exit 0
	I0729 18:26:38.026619   77627 main.go:141] libmachine: (embed-certs-409322) DBG | SSH cmd err, output: <nil>: 
	I0729 18:26:38.027028   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetConfigRaw
	I0729 18:26:38.027621   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetIP
	I0729 18:26:38.030532   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.030963   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.030989   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.031243   77627 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/config.json ...
	I0729 18:26:38.031413   77627 machine.go:94] provisionDockerMachine start ...
	I0729 18:26:38.031437   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:38.031642   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.033867   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.034218   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.034251   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.034380   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:38.034545   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.034682   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.034807   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:38.034992   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:38.035175   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:38.035185   77627 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:26:38.142565   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:26:38.142595   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetMachineName
	I0729 18:26:38.142842   77627 buildroot.go:166] provisioning hostname "embed-certs-409322"
	I0729 18:26:38.142872   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetMachineName
	I0729 18:26:38.143071   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.145625   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.145951   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.145974   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.146217   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:38.146423   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.146577   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.146730   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:38.146861   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:38.147046   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:38.147065   77627 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-409322 && echo "embed-certs-409322" | sudo tee /etc/hostname
	I0729 18:26:38.264341   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-409322
	
	I0729 18:26:38.264368   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.266846   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.267144   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.267171   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.267328   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:38.267488   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.267660   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.267757   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:38.267936   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:38.268106   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:38.268122   77627 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-409322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-409322/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-409322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:26:38.383748   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:26:38.383779   77627 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:26:38.383805   77627 buildroot.go:174] setting up certificates
	I0729 18:26:38.383817   77627 provision.go:84] configureAuth start
	I0729 18:26:38.383827   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetMachineName
	I0729 18:26:38.384110   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetIP
	I0729 18:26:38.386936   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.387320   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.387348   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.387508   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.389550   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.389871   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.389910   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.389978   77627 provision.go:143] copyHostCerts
	I0729 18:26:38.390039   77627 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:26:38.390052   77627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:26:38.390137   77627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:26:38.390257   77627 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:26:38.390268   77627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:26:38.390308   77627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:26:38.390406   77627 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:26:38.390416   77627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:26:38.390456   77627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:26:38.390526   77627 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.embed-certs-409322 san=[127.0.0.1 192.168.39.58 embed-certs-409322 localhost minikube]
	I0729 18:26:38.903674   77627 provision.go:177] copyRemoteCerts
	I0729 18:26:38.903758   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:26:38.903791   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:38.906662   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.906984   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:38.907018   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:38.907171   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:38.907360   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:38.907543   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:38.907667   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:26:38.992373   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 18:26:39.016465   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:26:39.039598   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:26:39.062415   77627 provision.go:87] duration metric: took 678.589364ms to configureAuth
	I0729 18:26:39.062443   77627 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:26:39.062622   77627 config.go:182] Loaded profile config "embed-certs-409322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:26:39.062696   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.065308   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.065703   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.065728   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.065902   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.066076   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.066244   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.066403   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.066553   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:39.066743   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:39.066759   77627 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:26:39.326153   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:26:39.326176   77627 machine.go:97] duration metric: took 1.29475208s to provisionDockerMachine
	I0729 18:26:39.326186   77627 start.go:293] postStartSetup for "embed-certs-409322" (driver="kvm2")
	I0729 18:26:39.326195   77627 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:26:39.326209   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.326603   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:26:39.326637   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.329049   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.329448   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.329476   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.329616   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.329822   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.330022   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.330186   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:26:39.413084   77627 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:26:39.417438   77627 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:26:39.417462   77627 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:26:39.417535   77627 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:26:39.417626   77627 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:26:39.417749   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:26:39.427256   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:26:39.451330   77627 start.go:296] duration metric: took 125.132889ms for postStartSetup
	I0729 18:26:39.451362   77627 fix.go:56] duration metric: took 19.499949606s for fixHost
	I0729 18:26:39.451380   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.453750   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.454047   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.454072   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.454237   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.454416   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.454570   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.454698   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.454864   77627 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:39.455069   77627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0729 18:26:39.455080   77627 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:26:39.563211   77627 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277599.531173461
	
	I0729 18:26:39.563238   77627 fix.go:216] guest clock: 1722277599.531173461
	I0729 18:26:39.563248   77627 fix.go:229] Guest: 2024-07-29 18:26:39.531173461 +0000 UTC Remote: 2024-07-29 18:26:39.451365859 +0000 UTC m=+269.697720486 (delta=79.807602ms)
	I0729 18:26:39.563278   77627 fix.go:200] guest clock delta is within tolerance: 79.807602ms
	I0729 18:26:39.563287   77627 start.go:83] releasing machines lock for "embed-certs-409322", held for 19.611902888s
	I0729 18:26:39.563318   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.563562   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetIP
	I0729 18:26:39.566225   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.566549   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.566575   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.566766   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.567227   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.567378   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:26:39.567460   77627 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:26:39.567501   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.567565   77627 ssh_runner.go:195] Run: cat /version.json
	I0729 18:26:39.567593   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:26:39.570113   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.570330   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.570536   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.570558   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.570747   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:39.570754   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.570776   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:39.570883   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:26:39.571004   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.571113   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:26:39.571211   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.571330   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:26:39.571438   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:26:39.571478   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:26:39.651235   77627 ssh_runner.go:195] Run: systemctl --version
	I0729 18:26:39.677383   77627 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:26:39.824036   77627 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:26:39.830027   77627 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:26:39.830103   77627 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:26:39.845939   77627 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:26:39.845963   77627 start.go:495] detecting cgroup driver to use...
	I0729 18:26:39.846019   77627 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:26:39.862867   77627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:26:39.878060   77627 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:26:39.878152   77627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:26:39.892471   77627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:26:39.906690   77627 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:26:40.039725   77627 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:26:40.201419   77627 docker.go:233] disabling docker service ...
	I0729 18:26:40.201489   77627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:26:40.222454   77627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:26:40.237523   77627 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:26:40.371463   77627 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:26:40.499676   77627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:26:40.514068   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:26:40.534051   77627 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:26:40.534114   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.545364   77627 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:26:40.545458   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.557113   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.568215   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.579433   77627 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:26:40.591005   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.601933   77627 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.621097   77627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:26:40.631960   77627 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:26:40.642308   77627 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:26:40.642383   77627 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:26:40.656469   77627 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:26:40.671251   77627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:26:40.784289   77627 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:26:40.933837   77627 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:26:40.933910   77627 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:26:40.939031   77627 start.go:563] Will wait 60s for crictl version
	I0729 18:26:40.939086   77627 ssh_runner.go:195] Run: which crictl
	I0729 18:26:40.943166   77627 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:26:40.985673   77627 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:26:40.985753   77627 ssh_runner.go:195] Run: crio --version
	I0729 18:26:41.013973   77627 ssh_runner.go:195] Run: crio --version
	I0729 18:26:41.046080   77627 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:26:40.822462   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting to get IP...
	I0729 18:26:40.823526   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:40.823948   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:40.824000   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:40.823920   78947 retry.go:31] will retry after 262.026124ms: waiting for machine to come up
	I0729 18:26:41.087492   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.087961   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.087991   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:41.087913   78947 retry.go:31] will retry after 380.066984ms: waiting for machine to come up
	I0729 18:26:41.469728   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.470215   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.470244   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:41.470181   78947 retry.go:31] will retry after 293.069239ms: waiting for machine to come up
	I0729 18:26:41.764797   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.765277   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:41.765303   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:41.765228   78947 retry.go:31] will retry after 491.247116ms: waiting for machine to come up
	I0729 18:26:42.257741   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:42.258247   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:42.258275   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:42.258220   78947 retry.go:31] will retry after 693.832082ms: waiting for machine to come up
	I0729 18:26:42.953375   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:42.954146   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:42.954169   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:42.954051   78947 retry.go:31] will retry after 710.005115ms: waiting for machine to come up
	I0729 18:26:43.666068   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:43.666478   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:43.666504   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:43.666438   78947 retry.go:31] will retry after 1.077324053s: waiting for machine to come up
	I0729 18:26:41.047322   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetIP
	I0729 18:26:41.049993   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:41.050394   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:26:41.050433   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:26:41.050630   77627 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 18:26:41.054805   77627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:26:41.066926   77627 kubeadm.go:883] updating cluster {Name:embed-certs-409322 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-409322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:26:41.067053   77627 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:26:41.067115   77627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:26:41.103417   77627 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 18:26:41.103489   77627 ssh_runner.go:195] Run: which lz4
	I0729 18:26:41.107793   77627 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 18:26:41.112161   77627 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:26:41.112192   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 18:26:42.559564   77627 crio.go:462] duration metric: took 1.451801292s to copy over tarball
	I0729 18:26:42.559679   77627 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:26:44.759513   77627 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.199801336s)
	I0729 18:26:44.759543   77627 crio.go:469] duration metric: took 2.199942615s to extract the tarball
	I0729 18:26:44.759554   77627 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 18:26:44.744984   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:44.745450   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:44.745477   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:44.745403   78947 retry.go:31] will retry after 1.064257005s: waiting for machine to come up
	I0729 18:26:45.811414   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:45.811840   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:45.811880   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:45.811799   78947 retry.go:31] will retry after 1.30236943s: waiting for machine to come up
	I0729 18:26:47.116252   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:47.116668   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:47.116728   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:47.116647   78947 retry.go:31] will retry after 1.424333691s: waiting for machine to come up
	I0729 18:26:48.543481   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:48.543945   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:48.543973   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:48.543894   78947 retry.go:31] will retry after 2.106061522s: waiting for machine to come up
	I0729 18:26:44.798609   77627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:26:44.848236   77627 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:26:44.848257   77627 cache_images.go:84] Images are preloaded, skipping loading
	I0729 18:26:44.848265   77627 kubeadm.go:934] updating node { 192.168.39.58 8443 v1.30.3 crio true true} ...
	I0729 18:26:44.848355   77627 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-409322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-409322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:26:44.848415   77627 ssh_runner.go:195] Run: crio config
	I0729 18:26:44.901558   77627 cni.go:84] Creating CNI manager for ""
	I0729 18:26:44.901584   77627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:26:44.901597   77627 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:26:44.901625   77627 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-409322 NodeName:embed-certs-409322 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:26:44.901807   77627 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-409322"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
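	The config above is one multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. A minimal Go sketch, assuming gopkg.in/yaml.v3 and an illustrative local file name, that splits such a file and lists each document's apiVersion/kind, which is handy for eyeballing what kubeadm will actually consume:

    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // illustrative path
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                log.Fatal(err)
            }
            // Each "---"-separated document is decoded independently.
            fmt.Printf("%s/%s\n", doc.APIVersion, doc.Kind)
        }
    }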
	
	I0729 18:26:44.901875   77627 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:26:44.912290   77627 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:26:44.912351   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:26:44.921801   77627 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0729 18:26:44.940473   77627 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:26:44.958445   77627 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0729 18:26:44.976890   77627 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I0729 18:26:44.980974   77627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
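	The one-liner above keeps /etc/hosts idempotent: any stale control-plane.minikube.internal entry is dropped and the current IP is re-appended. A hedged Go equivalent of that shell pipeline (path, alias and IP are taken from the log line; run as root, just like the sudo cp):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const (
            hostsFile = "/etc/hosts"
            alias     = "control-plane.minikube.internal"
            ip        = "192.168.39.58"
        )

        data, err := os.ReadFile(hostsFile)
        if err != nil {
            log.Fatal(err)
        }

        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any stale mapping for the control-plane alias (mirrors the grep -v).
            if strings.HasSuffix(line, "\t"+alias) {
                continue
            }
            kept = append(kept, line)
        }
        // Re-append the current mapping (mirrors the echo).
        kept = append(kept, ip+"\t"+alias)

        if err := os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            log.Fatal(err)
        }
    }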
	I0729 18:26:44.994793   77627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:26:45.120453   77627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:26:45.138398   77627 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322 for IP: 192.168.39.58
	I0729 18:26:45.138419   77627 certs.go:194] generating shared ca certs ...
	I0729 18:26:45.138438   77627 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:26:45.138592   77627 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:26:45.138643   77627 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:26:45.138657   77627 certs.go:256] generating profile certs ...
	I0729 18:26:45.138751   77627 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/client.key
	I0729 18:26:45.138823   77627 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/apiserver.key.4af4a6b9
	I0729 18:26:45.138889   77627 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/proxy-client.key
	I0729 18:26:45.139034   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:26:45.139074   77627 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:26:45.139088   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:26:45.139122   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:26:45.139161   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:26:45.139200   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:26:45.139305   77627 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:26:45.139979   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:26:45.177194   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:26:45.206349   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:26:45.242291   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:26:45.277062   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 18:26:45.312447   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 18:26:45.345482   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:26:45.369151   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/embed-certs-409322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:26:45.394521   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:26:45.418579   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:26:45.443252   77627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:26:45.466770   77627 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:26:45.484159   77627 ssh_runner.go:195] Run: openssl version
	I0729 18:26:45.490045   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:26:45.501166   77627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:26:45.505930   77627 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:26:45.505988   77627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:26:45.511926   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:26:45.522860   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:26:45.533560   77627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:26:45.538411   77627 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:26:45.538474   77627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:26:45.544485   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:26:45.555603   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:26:45.566407   77627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:26:45.570892   77627 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:26:45.570944   77627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:26:45.576555   77627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
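	Each CA bundle copied into /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0) so OpenSSL-based clients trust it. A rough Go sketch of the same hash-and-symlink dance, shelling out to openssl exactly as the log does; the certificate path is illustrative and /etc/ssl/certs is assumed writable:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative

        // "openssl x509 -hash -noout -in <cert>" prints the subject hash.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out))

        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace an existing link, like "ln -fs"
        if err := os.Symlink(cert, link); err != nil {
            log.Fatal(err)
        }
        fmt.Println("linked", cert, "->", link)
    }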
	I0729 18:26:45.587780   77627 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:26:45.592689   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:26:45.598981   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:26:45.604952   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:26:45.611225   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:26:45.617506   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:26:45.623744   77627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
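	The "-checkend 86400" calls above confirm each control-plane certificate is still valid for at least 24 hours before the restart is attempted. The same check can be done in pure Go; a sketch using one of the files from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }

        // Equivalent of "openssl x509 -checkend 86400": still valid 24h from now?
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h")
            os.Exit(1)
        }
        fmt.Println("certificate is valid for at least another 24h")
    }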
	I0729 18:26:45.629836   77627 kubeadm.go:392] StartCluster: {Name:embed-certs-409322 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-409322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:26:45.629947   77627 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:26:45.630003   77627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:26:45.667768   77627 cri.go:89] found id: ""
	I0729 18:26:45.667853   77627 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:26:45.678703   77627 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:26:45.678724   77627 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:26:45.678772   77627 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:26:45.691979   77627 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:26:45.693237   77627 kubeconfig.go:125] found "embed-certs-409322" server: "https://192.168.39.58:8443"
	I0729 18:26:45.696093   77627 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:26:45.708981   77627 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.58
	I0729 18:26:45.709017   77627 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:26:45.709030   77627 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:26:45.709088   77627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:26:45.748738   77627 cri.go:89] found id: ""
	I0729 18:26:45.748817   77627 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:26:45.775148   77627 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:26:45.786631   77627 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:26:45.786651   77627 kubeadm.go:157] found existing configuration files:
	
	I0729 18:26:45.786701   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:26:45.799453   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:26:45.799507   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:26:45.809691   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:26:45.819592   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:26:45.819638   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:26:45.832072   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:26:45.843769   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:26:45.843817   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:26:45.854649   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:26:45.863448   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:26:45.863504   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:26:45.872399   77627 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:26:45.881992   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:46.012679   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:47.143076   77627 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.130359187s)
	I0729 18:26:47.143112   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:47.370854   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:47.446808   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:47.550087   77627 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:26:47.550191   77627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:26:48.050502   77627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:26:48.550499   77627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:26:48.608713   77627 api_server.go:72] duration metric: took 1.058625786s to wait for apiserver process to appear ...
	I0729 18:26:48.608745   77627 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:26:48.608773   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:51.829925   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:26:51.829963   77627 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:26:51.829979   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:51.843474   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:26:51.843503   77627 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:26:52.109882   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:52.117387   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:26:52.117415   77627 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:26:52.608863   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:52.613809   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:26:52.613840   77627 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:26:53.109430   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:26:53.115353   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I0729 18:26:53.122373   77627 api_server.go:141] control plane version: v1.30.3
	I0729 18:26:53.122411   77627 api_server.go:131] duration metric: took 4.513658045s to wait for apiserver health ...
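	After the kubeadm phases, the restart path polls https://<node-ip>:8443/healthz roughly every 500ms and tolerates the 403 (anonymous user) and 500 (rbac/bootstrap-roles and priority-class hooks not finished) responses seen above until a 200 arrives. A minimal polling sketch, assuming it is acceptable for this anonymous probe to skip TLS verification:

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The probe is anonymous, so server certificate verification is skipped here.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }

        url := "https://192.168.39.58:8443/healthz"
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                // 403/500 while post-start hooks finish: keep retrying.
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("timed out waiting for /healthz")
    }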
	I0729 18:26:53.122420   77627 cni.go:84] Creating CNI manager for ""
	I0729 18:26:53.122426   77627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:26:53.123807   77627 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:26:50.651329   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:50.651724   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:50.651753   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:50.651678   78947 retry.go:31] will retry after 3.358167933s: waiting for machine to come up
	I0729 18:26:54.014102   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:54.014543   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | unable to find current IP address of domain default-k8s-diff-port-502055 in network mk-default-k8s-diff-port-502055
	I0729 18:26:54.014576   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | I0729 18:26:54.014495   78947 retry.go:31] will retry after 4.372189125s: waiting for machine to come up
	I0729 18:26:53.124953   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:26:53.140970   77627 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
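	The 496-byte file pushed to /etc/cni/net.d/1-k8s.conflist is the bridge CNI chain that the "Configuring bridge CNI" step refers to. An illustrative Go sketch that writes a minimal bridge + portmap conflist for the pod CIDR from the log; the field values are assumptions about the general shape, not the exact bytes minikube writes:

    package main

    import (
        "log"
        "os"
    )

    // Illustrative conflist; values are assumptions, not minikube's exact file.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
    `

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
            log.Fatal(err)
        }
    }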
	I0729 18:26:53.179660   77627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:26:53.193885   77627 system_pods.go:59] 8 kube-system pods found
	I0729 18:26:53.193921   77627 system_pods.go:61] "coredns-7db6d8ff4d-vxvfc" [da2fd5a1-f57f-4374-99ee-9017e228176f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 18:26:53.193932   77627 system_pods.go:61] "etcd-embed-certs-409322" [3eca462f-6156-4858-a886-30d0d32faa35] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 18:26:53.193944   77627 system_pods.go:61] "kube-apiserver-embed-certs-409322" [4c6473c7-d7b8-4513-b800-7cab08748d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 18:26:53.193953   77627 system_pods.go:61] "kube-controller-manager-embed-certs-409322" [2dc47da0-3d24-49d8-91ae-13074468b423] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 18:26:53.193961   77627 system_pods.go:61] "kube-proxy-zf5jf" [a0b6fd82-d0b1-4821-a668-4cb6420b4860] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 18:26:53.193969   77627 system_pods.go:61] "kube-scheduler-embed-certs-409322" [ab422567-58e6-4f22-a7cf-391b35cc386c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 18:26:53.193977   77627 system_pods.go:61] "metrics-server-569cc877fc-flh27" [83d6c69c-200d-4ce2-80e9-b83ff5b6ebe9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:26:53.193989   77627 system_pods.go:61] "storage-provisioner" [73ff548f-26c3-4442-a9bd-bdac45261476] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 18:26:53.194002   77627 system_pods.go:74] duration metric: took 14.320361ms to wait for pod list to return data ...
	I0729 18:26:53.194014   77627 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:26:53.197826   77627 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:26:53.197858   77627 node_conditions.go:123] node cpu capacity is 2
	I0729 18:26:53.197870   77627 node_conditions.go:105] duration metric: took 3.850077ms to run NodePressure ...
	I0729 18:26:53.197884   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:26:53.467868   77627 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 18:26:53.471886   77627 kubeadm.go:739] kubelet initialised
	I0729 18:26:53.471905   77627 kubeadm.go:740] duration metric: took 4.016417ms waiting for restarted kubelet to initialise ...
	I0729 18:26:53.471912   77627 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:26:53.476695   77627 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-vxvfc" in "kube-system" namespace to be "Ready" ...
	I0729 18:26:53.480449   77627 pod_ready.go:97] node "embed-certs-409322" hosting pod "coredns-7db6d8ff4d-vxvfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.480481   77627 pod_ready.go:81] duration metric: took 3.766ms for pod "coredns-7db6d8ff4d-vxvfc" in "kube-system" namespace to be "Ready" ...
	E0729 18:26:53.480491   77627 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-409322" hosting pod "coredns-7db6d8ff4d-vxvfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.480501   77627 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:26:53.484712   77627 pod_ready.go:97] node "embed-certs-409322" hosting pod "etcd-embed-certs-409322" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.484739   77627 pod_ready.go:81] duration metric: took 4.228077ms for pod "etcd-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	E0729 18:26:53.484750   77627 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-409322" hosting pod "etcd-embed-certs-409322" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.484759   77627 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:26:53.488510   77627 pod_ready.go:97] node "embed-certs-409322" hosting pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.488532   77627 pod_ready.go:81] duration metric: took 3.76371ms for pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	E0729 18:26:53.488539   77627 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-409322" hosting pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-409322" has status "Ready":"False"
	I0729 18:26:53.488545   77627 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
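	Each pod_ready.go wait above checks the Pod's Ready condition but short-circuits while the node itself still reports Ready=False, which is why the coredns/etcd/apiserver waits are skipped here. A rough client-go sketch of the underlying per-pod check (kubeconfig path is a placeholder; the namespace and pod name come from the log):

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()

        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-embed-certs-409322", metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("pod is Ready")
                        return
                    }
                }
            }
            select {
            case <-ctx.Done():
                log.Fatal("timed out waiting for pod to become Ready")
            case <-time.After(2 * time.Second):
            }
        }
    }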
	I0729 18:26:58.387940   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.388358   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Found IP for machine: 192.168.61.244
	I0729 18:26:58.388383   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has current primary IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.388396   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Reserving static IP address...
	I0729 18:26:58.388794   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-502055", mac: "52:54:00:ae:63:e1", ip: "192.168.61.244"} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.388826   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Reserved static IP address: 192.168.61.244
	I0729 18:26:58.388848   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | skip adding static IP to network mk-default-k8s-diff-port-502055 - found existing host DHCP lease matching {name: "default-k8s-diff-port-502055", mac: "52:54:00:ae:63:e1", ip: "192.168.61.244"}
	I0729 18:26:58.388873   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Waiting for SSH to be available...
	I0729 18:26:58.388894   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Getting to WaitForSSH function...
	I0729 18:26:58.390937   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.391281   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.391319   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.391381   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Using SSH client type: external
	I0729 18:26:58.391408   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa (-rw-------)
	I0729 18:26:58.391457   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.244 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:26:58.391490   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | About to run SSH command:
	I0729 18:26:58.391511   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | exit 0
	I0729 18:26:58.518399   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | SSH cmd err, output: <nil>: 
	I0729 18:26:58.518782   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetConfigRaw
	I0729 18:26:58.519492   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetIP
	I0729 18:26:58.522245   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.522580   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.522615   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.522862   77859 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/config.json ...
	I0729 18:26:58.523037   77859 machine.go:94] provisionDockerMachine start ...
	I0729 18:26:58.523053   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:58.523258   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:58.525654   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.525998   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.526018   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.526185   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:58.526351   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.526555   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.526705   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:58.526874   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:58.527066   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:58.527079   77859 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:26:58.635267   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:26:58.635302   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetMachineName
	I0729 18:26:58.635524   77859 buildroot.go:166] provisioning hostname "default-k8s-diff-port-502055"
	I0729 18:26:58.635550   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetMachineName
	I0729 18:26:58.635789   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:58.638770   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.639235   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.639265   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.639371   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:58.639564   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.639729   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.639865   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:58.640048   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:58.640227   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:58.640245   77859 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-502055 && echo "default-k8s-diff-port-502055" | sudo tee /etc/hostname
	I0729 18:26:58.760577   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-502055
	
	I0729 18:26:58.760603   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:58.763294   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.763591   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.763625   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.763766   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:58.763970   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.764159   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:58.764311   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:58.764480   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:58.764641   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:58.764659   77859 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-502055' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-502055/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-502055' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:26:58.879366   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:26:58.879400   77859 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:26:58.879440   77859 buildroot.go:174] setting up certificates
	I0729 18:26:58.879451   77859 provision.go:84] configureAuth start
	I0729 18:26:58.879463   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetMachineName
	I0729 18:26:58.879735   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetIP
	I0729 18:26:58.882335   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.882652   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.882680   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.882848   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:58.885023   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.885313   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:58.885339   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:58.885433   77859 provision.go:143] copyHostCerts
	I0729 18:26:58.885479   77859 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:26:58.885488   77859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:26:58.885544   77859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:26:58.885633   77859 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:26:58.885641   77859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:26:58.885660   77859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:26:58.885709   77859 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:26:58.885716   77859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:26:58.885733   77859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:26:58.885783   77859 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-502055 san=[127.0.0.1 192.168.61.244 default-k8s-diff-port-502055 localhost minikube]
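	configureAuth signs a per-machine server certificate against the shared CA with the SANs listed above (loopback, the machine IP, the machine name, localhost, minikube). A condensed crypto/x509 sketch of signing such a certificate; to stay self-contained it generates a throwaway CA in memory, whereas minikube reuses its existing CA key pair:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "log"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway CA (illustrative only).
        caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            log.Fatal(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SANs from the provision log line above.
        srvKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            log.Fatal(err)
        }
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "default-k8s-diff-port-502055"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"default-k8s-diff-port-502055", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.244")},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("signed server cert: %d bytes DER\n", len(srvDER))
    }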
	I0729 18:26:59.130657   77859 provision.go:177] copyRemoteCerts
	I0729 18:26:59.130724   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:26:59.130749   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.133536   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.133898   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.133922   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.134079   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.134260   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.134421   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.134530   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:26:59.216614   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 18:26:59.240540   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:26:59.267350   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:26:59.294003   77859 provision.go:87] duration metric: took 414.539559ms to configureAuth
	I0729 18:26:59.294032   77859 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:26:59.294222   77859 config.go:182] Loaded profile config "default-k8s-diff-port-502055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:26:59.294293   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.296911   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.297285   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.297311   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.297450   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.297656   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.297804   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.297935   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.298102   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:59.298265   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:59.298281   77859 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:26:59.557084   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:26:59.557131   77859 machine.go:97] duration metric: took 1.034080964s to provisionDockerMachine
	I0729 18:26:59.557148   77859 start.go:293] postStartSetup for "default-k8s-diff-port-502055" (driver="kvm2")
	I0729 18:26:59.557165   77859 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:26:59.557191   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.557496   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:26:59.557529   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.559962   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.560255   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.560276   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.560461   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.560635   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.560798   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.560953   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:26:59.645623   77859 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:26:59.650416   77859 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:26:59.650447   77859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:26:59.650531   77859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:26:59.650624   77859 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:26:59.650730   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:26:59.660864   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:26:59.685728   77859 start.go:296] duration metric: took 128.564534ms for postStartSetup
	I0729 18:26:59.685767   77859 fix.go:56] duration metric: took 20.122314731s for fixHost
	I0729 18:26:59.685791   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.688401   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.688773   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.688801   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.688978   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.689157   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.689293   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.689401   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.689551   77859 main.go:141] libmachine: Using SSH client type: native
	I0729 18:26:59.689712   77859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0729 18:26:59.689722   77859 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 18:26:55.494570   77627 pod_ready.go:102] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"False"
	I0729 18:26:57.495784   77627 pod_ready.go:102] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"False"
	I0729 18:26:59.799712   78080 start.go:364] duration metric: took 4m12.475660562s to acquireMachinesLock for "old-k8s-version-386663"
	I0729 18:26:59.799786   78080 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:26:59.799796   78080 fix.go:54] fixHost starting: 
	I0729 18:26:59.800184   78080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:26:59.800215   78080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:26:59.816885   78080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37963
	I0729 18:26:59.817336   78080 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:26:59.817822   78080 main.go:141] libmachine: Using API Version  1
	I0729 18:26:59.817851   78080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:26:59.818283   78080 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:26:59.818505   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:26:59.818671   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetState
	I0729 18:26:59.820232   78080 fix.go:112] recreateIfNeeded on old-k8s-version-386663: state=Stopped err=<nil>
	I0729 18:26:59.820254   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	W0729 18:26:59.820426   78080 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:26:59.822140   78080 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-386663" ...
	I0729 18:26:59.799573   77859 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277619.755982716
	
	I0729 18:26:59.799603   77859 fix.go:216] guest clock: 1722277619.755982716
	I0729 18:26:59.799614   77859 fix.go:229] Guest: 2024-07-29 18:26:59.755982716 +0000 UTC Remote: 2024-07-29 18:26:59.685771603 +0000 UTC m=+259.980298680 (delta=70.211113ms)
	I0729 18:26:59.799637   77859 fix.go:200] guest clock delta is within tolerance: 70.211113ms
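The fix.go lines above read the guest clock over SSH (date +%s.%N), compare it with the host clock, and accept the result when the drift stays within tolerance. Below is a minimal sketch of that comparison in Go; the tolerance constant and function names are illustrative, not minikube's actual identifiers.

	package main

	import (
		"fmt"
		"time"
	)

	// maxClockDelta is an illustrative tolerance; the log above reports the
	// measured delta (70.211113ms) and accepts it because it is small enough.
	const maxClockDelta = 2 * time.Second

	// clockWithinTolerance reports whether the guest clock is close enough to
	// the host clock that no resync is needed.
	func clockWithinTolerance(guest, host time.Time) bool {
		delta := host.Sub(guest)
		if delta < 0 {
			delta = -delta
		}
		return delta <= maxClockDelta
	}

	func main() {
		guest := time.Unix(1722277619, 755982716) // value parsed from `date +%s.%N` in the log
		fmt.Println(clockWithinTolerance(guest, time.Now()))
	}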
	I0729 18:26:59.799641   77859 start.go:83] releasing machines lock for "default-k8s-diff-port-502055", held for 20.236230068s
	I0729 18:26:59.799672   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.799944   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetIP
	I0729 18:26:59.802636   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.802983   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.803013   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.803248   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.803740   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.803927   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:26:59.804023   77859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:26:59.804070   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.804193   77859 ssh_runner.go:195] Run: cat /version.json
	I0729 18:26:59.804229   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:26:59.807037   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.807117   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.807395   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.807435   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.807528   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.807547   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:26:59.807565   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:26:59.807708   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.807717   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:26:59.807910   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:26:59.807936   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.808043   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:26:59.808098   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:26:59.808244   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:26:59.920371   77859 ssh_runner.go:195] Run: systemctl --version
	I0729 18:26:59.926620   77859 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:27:00.072161   77859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:27:00.079273   77859 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:27:00.079340   77859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:27:00.096528   77859 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:27:00.096550   77859 start.go:495] detecting cgroup driver to use...
	I0729 18:27:00.096610   77859 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:27:00.113690   77859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:27:00.129058   77859 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:27:00.129126   77859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:27:00.143930   77859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:27:00.158085   77859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:27:00.296398   77859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:27:00.482313   77859 docker.go:233] disabling docker service ...
	I0729 18:27:00.482459   77859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:27:00.501504   77859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:27:00.520932   77859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:27:00.657805   77859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:27:00.792064   77859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:27:00.807790   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:27:00.827373   77859 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:27:00.827423   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.838281   77859 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:27:00.838340   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.849533   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.860820   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.872359   77859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:27:00.883904   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.895589   77859 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:00.914639   77859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
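The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs, moves conmon into the pod cgroup, and opens unprivileged ports via default_sysctls. A minimal local sketch of one of those substitutions (the pause-image line) using Go's regexp package instead of sed over SSH; the helper name and the idea of running it locally are illustrative only.

	package main

	import (
		"log"
		"os"
		"regexp"
	)

	// setPauseImage rewrites any existing pause_image line in a CRI-O drop-in,
	// mirroring the sed -i 's|^.*pause_image = .*$|...|' edit in the log above.
	func setPauseImage(path, image string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		out := re.ReplaceAll(data, []byte(`pause_image = "`+image+`"`))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.9"); err != nil {
			log.Fatal(err)
		}
	}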
	I0729 18:27:00.926278   77859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:27:00.936329   77859 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:27:00.936383   77859 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:27:00.951219   77859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:27:00.966530   77859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:01.086665   77859 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:27:01.233627   77859 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:27:01.233703   77859 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:27:01.241055   77859 start.go:563] Will wait 60s for crictl version
	I0729 18:27:01.241122   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:27:01.244875   77859 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:27:01.284013   77859 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:27:01.284103   77859 ssh_runner.go:195] Run: crio --version
	I0729 18:27:01.315493   77859 ssh_runner.go:195] Run: crio --version
	I0729 18:27:01.348781   77859 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:26:59.823421   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .Start
	I0729 18:26:59.823575   78080 main.go:141] libmachine: (old-k8s-version-386663) Ensuring networks are active...
	I0729 18:26:59.824264   78080 main.go:141] libmachine: (old-k8s-version-386663) Ensuring network default is active
	I0729 18:26:59.824641   78080 main.go:141] libmachine: (old-k8s-version-386663) Ensuring network mk-old-k8s-version-386663 is active
	I0729 18:26:59.825024   78080 main.go:141] libmachine: (old-k8s-version-386663) Getting domain xml...
	I0729 18:26:59.825885   78080 main.go:141] libmachine: (old-k8s-version-386663) Creating domain...
	I0729 18:27:01.104265   78080 main.go:141] libmachine: (old-k8s-version-386663) Waiting to get IP...
	I0729 18:27:01.105349   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.105790   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.105836   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.105761   79098 retry.go:31] will retry after 308.255094ms: waiting for machine to come up
	I0729 18:27:01.415431   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.415999   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.416030   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.415952   79098 retry.go:31] will retry after 236.525723ms: waiting for machine to come up
	I0729 18:27:01.654767   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.655279   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.655312   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.655247   79098 retry.go:31] will retry after 311.010394ms: waiting for machine to come up
	I0729 18:27:01.967850   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:01.968374   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:01.968404   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:01.968333   79098 retry.go:31] will retry after 468.477549ms: waiting for machine to come up
	I0729 18:27:01.350059   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetIP
	I0729 18:27:01.352945   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:01.353398   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:27:01.353429   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:01.353630   77859 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 18:27:01.357955   77859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:01.371879   77859 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-502055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-502055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.244 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:27:01.372034   77859 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:27:01.372100   77859 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:01.412356   77859 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 18:27:01.412423   77859 ssh_runner.go:195] Run: which lz4
	I0729 18:27:01.417768   77859 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 18:27:01.422809   77859 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:27:01.422836   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 18:27:02.909800   77859 crio.go:462] duration metric: took 1.492088664s to copy over tarball
	I0729 18:27:02.909868   77859 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:26:59.995351   77627 pod_ready.go:102] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:01.999130   77627 pod_ready.go:102] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:04.012357   77627 pod_ready.go:92] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:04.012385   77627 pod_ready.go:81] duration metric: took 10.523832262s for pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.012398   77627 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zf5jf" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.025409   77627 pod_ready.go:92] pod "kube-proxy-zf5jf" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:04.025448   77627 pod_ready.go:81] duration metric: took 13.042254ms for pod "kube-proxy-zf5jf" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.025461   77627 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.036057   77627 pod_ready.go:92] pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:04.036078   77627 pod_ready.go:81] duration metric: took 10.608531ms for pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:04.036090   77627 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:02.438066   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:02.438657   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:02.438686   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:02.438618   79098 retry.go:31] will retry after 601.056921ms: waiting for machine to come up
	I0729 18:27:03.041582   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:03.042097   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:03.042127   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:03.042040   79098 retry.go:31] will retry after 712.049848ms: waiting for machine to come up
	I0729 18:27:03.755536   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:03.756010   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:03.756040   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:03.755988   79098 retry.go:31] will retry after 1.092318096s: waiting for machine to come up
	I0729 18:27:04.849745   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:04.850202   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:04.850226   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:04.850147   79098 retry.go:31] will retry after 903.54457ms: waiting for machine to come up
	I0729 18:27:05.754781   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:05.755193   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:05.755218   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:05.755157   79098 retry.go:31] will retry after 1.693512671s: waiting for machine to come up
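The retry.go lines above poll libvirt for the restarted VM's DHCP lease, sleeping a randomised, gradually growing interval between attempts. A minimal sketch of that kind of wait loop in Go; the lookup callback, attempt count, and delay bounds are illustrative rather than minikube's actual retry implementation.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP calls lookup until it returns an address, sleeping a random,
	// roughly increasing delay between attempts, like the "will retry after"
	// lines in the log.
	func waitForIP(lookup func() (string, error), attempts int) (string, error) {
		for i := 0; i < attempts; i++ {
			ip, err := lookup()
			if err == nil {
				return ip, nil
			}
			delay := time.Duration(rand.Intn(500*(i+1))+100) * time.Millisecond
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
		}
		return "", errors.New("machine did not come up")
	}

	func main() {
		ip, err := waitForIP(func() (string, error) { return "", errors.New("no DHCP lease yet") }, 3)
		fmt.Println(ip, err)
	}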
	I0729 18:27:05.188101   77859 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.27820184s)
	I0729 18:27:05.188132   77859 crio.go:469] duration metric: took 2.278304723s to extract the tarball
	I0729 18:27:05.188140   77859 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 18:27:05.227453   77859 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:05.274530   77859 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:27:05.274560   77859 cache_images.go:84] Images are preloaded, skipping loading
	I0729 18:27:05.274571   77859 kubeadm.go:934] updating node { 192.168.61.244 8444 v1.30.3 crio true true} ...
	I0729 18:27:05.274708   77859 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-502055 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-502055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:27:05.274788   77859 ssh_runner.go:195] Run: crio config
	I0729 18:27:05.320697   77859 cni.go:84] Creating CNI manager for ""
	I0729 18:27:05.320725   77859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:27:05.320741   77859 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:27:05.320774   77859 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.244 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-502055 NodeName:default-k8s-diff-port-502055 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.244"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.244 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:27:05.320948   77859 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.244
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-502055"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.244
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.244"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:27:05.321028   77859 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:27:05.331541   77859 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:27:05.331609   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:27:05.341433   77859 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0729 18:27:05.358696   77859 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:27:05.376531   77859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0729 18:27:05.394349   77859 ssh_runner.go:195] Run: grep 192.168.61.244	control-plane.minikube.internal$ /etc/hosts
	I0729 18:27:05.398156   77859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.244	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
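The /etc/hosts rewrite above strips any stale control-plane.minikube.internal line and appends the current mapping in a single shell pipeline. A minimal Go sketch of the same filter-and-append step; writing to a scratch path keeps the sketch harmless, and the function name is illustrative.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry drops any line that already maps the host name and
	// appends a fresh "ip<TAB>host" entry, like the grep -v / echo pipeline above.
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line != "" && !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		scratch := "/tmp/hosts.sketch"
		_ = os.WriteFile(scratch, []byte("127.0.0.1\tlocalhost\n192.168.61.1\tcontrol-plane.minikube.internal\n"), 0o644)
		fmt.Println(ensureHostsEntry(scratch, "192.168.61.244", "control-plane.minikube.internal"))
	}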
	I0729 18:27:05.411839   77859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:05.561467   77859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:27:05.583184   77859 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055 for IP: 192.168.61.244
	I0729 18:27:05.583209   77859 certs.go:194] generating shared ca certs ...
	I0729 18:27:05.583251   77859 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:05.583406   77859 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:27:05.583460   77859 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:27:05.583473   77859 certs.go:256] generating profile certs ...
	I0729 18:27:05.583577   77859 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/client.key
	I0729 18:27:05.583642   77859 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/apiserver.key.2edc4448
	I0729 18:27:05.583692   77859 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/proxy-client.key
	I0729 18:27:05.583835   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:27:05.583872   77859 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:27:05.583886   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:27:05.583917   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:27:05.583957   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:27:05.583991   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:27:05.584048   77859 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:05.584726   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:27:05.624996   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:27:05.670153   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:27:05.715354   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:27:05.743807   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 18:27:05.777366   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:27:05.802152   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:27:05.826974   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/default-k8s-diff-port-502055/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:27:05.850417   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:27:05.873185   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:27:05.899387   77859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:27:05.927963   77859 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:27:05.947817   77859 ssh_runner.go:195] Run: openssl version
	I0729 18:27:05.955635   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:27:05.969765   77859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:27:05.974559   77859 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:27:05.974606   77859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:27:05.980557   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:27:05.991819   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:27:06.004961   77859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:06.009999   77859 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:06.010074   77859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:06.016045   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:27:06.027698   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:27:06.039648   77859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:27:06.045057   77859 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:27:06.045130   77859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:27:06.051127   77859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 18:27:06.062761   77859 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:27:06.068832   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:27:06.076652   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:27:06.084517   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:27:06.091125   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:27:06.097346   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:27:06.103428   77859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
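Each openssl x509 -checkend 86400 call above asserts that an existing control-plane certificate remains valid for at least another 24 hours before it is reused. A minimal Go equivalent of one such check; the certificate path is taken from the log and the function name is illustrative.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires inside
	// the given window, mirroring openssl x509 -checkend <seconds>.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}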
	I0729 18:27:06.109312   77859 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-502055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-502055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.244 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:27:06.109403   77859 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:27:06.109440   77859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:06.153439   77859 cri.go:89] found id: ""
	I0729 18:27:06.153528   77859 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:27:06.166412   77859 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:27:06.166434   77859 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:27:06.166486   77859 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:27:06.183064   77859 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:27:06.184168   77859 kubeconfig.go:125] found "default-k8s-diff-port-502055" server: "https://192.168.61.244:8444"
	I0729 18:27:06.186283   77859 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:27:06.197418   77859 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.244
	I0729 18:27:06.197444   77859 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:27:06.197454   77859 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:27:06.197506   77859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:06.237753   77859 cri.go:89] found id: ""
	I0729 18:27:06.237839   77859 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:27:06.257323   77859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:27:06.269157   77859 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:27:06.269176   77859 kubeadm.go:157] found existing configuration files:
	
	I0729 18:27:06.269229   77859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 18:27:06.279313   77859 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:27:06.279369   77859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:27:06.292141   77859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 18:27:06.303961   77859 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:27:06.304028   77859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:27:06.316051   77859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 18:27:06.328004   77859 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:27:06.328064   77859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:27:06.340357   77859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 18:27:06.352021   77859 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:27:06.352068   77859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:27:06.364479   77859 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:27:06.375313   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:06.498692   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:07.853845   77859 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.355105254s)
	I0729 18:27:07.853882   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:08.069616   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:08.144574   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:08.225236   77859 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:27:08.225336   77859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:08.725789   77859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:09.226271   77859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:09.270268   77859 api_server.go:72] duration metric: took 1.045028259s to wait for apiserver process to appear ...
	I0729 18:27:09.270298   77859 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:27:09.270320   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:09.270877   77859 api_server.go:269] stopped: https://192.168.61.244:8444/healthz: Get "https://192.168.61.244:8444/healthz": dial tcp 192.168.61.244:8444: connect: connection refused
	I0729 18:27:06.043838   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:08.044382   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:07.451087   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:07.451659   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:07.451688   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:07.451607   79098 retry.go:31] will retry after 1.734643072s: waiting for machine to come up
	I0729 18:27:09.188407   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:09.188963   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:09.188997   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:09.188900   79098 retry.go:31] will retry after 2.010973572s: waiting for machine to come up
	I0729 18:27:11.201171   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:11.201586   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:11.201620   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:11.201535   79098 retry.go:31] will retry after 3.178533437s: waiting for machine to come up
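The libmachine lines above show the provisioner repeatedly asking the hypervisor for the domain's IP and sleeping a growing, jittered interval between attempts ("will retry after ..."). Below is a minimal Go sketch of that retry pattern, not minikube's actual retry.go implementation; lookupIP, the attempt count, and the delays are illustrative stand-ins.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP keeps calling lookupIP until it succeeds, sleeping a growing,
    // jittered delay between attempts, mirroring the "will retry after ..."
    // lines in the log. lookupIP is a hypothetical stand-in for the real
    // DHCP-lease lookup.
    func waitForIP(lookupIP func() (string, error), attempts int) (string, error) {
    	delay := time.Second
    	for i := 0; i < attempts; i++ {
    		if ip, err := lookupIP(); err == nil {
    			return ip, nil
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
    		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay+jitter)
    		time.Sleep(delay + jitter)
    		delay *= 2 // back off after every failed attempt
    	}
    	return "", errors.New("machine never reported an IP address")
    }

    func main() {
    	tries := 0
    	ip, err := waitForIP(func() (string, error) {
    		tries++
    		if tries < 3 { // fail twice, then "come up"
    			return "", errors.New("unable to find current IP address")
    		}
    		return "192.168.50.70", nil
    	}, 10)
    	fmt.Println(ip, err)
    }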
	I0729 18:27:09.771273   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:12.506136   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:27:12.506166   77859 api_server.go:103] status: https://192.168.61.244:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:27:12.506179   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:12.518847   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:27:12.518881   77859 api_server.go:103] status: https://192.168.61.244:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:27:12.771281   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:12.775798   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:27:12.775832   77859 api_server.go:103] status: https://192.168.61.244:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:27:13.270383   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:13.281935   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:27:13.281975   77859 api_server.go:103] status: https://192.168.61.244:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:27:13.770440   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:27:13.776004   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 200:
	ok
	I0729 18:27:13.783210   77859 api_server.go:141] control plane version: v1.30.3
	I0729 18:27:13.783237   77859 api_server.go:131] duration metric: took 4.512933596s to wait for apiserver health ...
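The healthz probes above progress from connection refused, to 403 while the RBAC bootstrap hooks finish, to 500 while the remaining post-start hooks complete, and finally to 200. A rough Go sketch of that kind of polling loop follows; the URL, poll interval, and insecure TLS transport are assumptions for illustration, not the exact api_server.go code.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz hits the apiserver /healthz endpoint until it answers
    // 200 OK or the deadline passes, tolerating 403/500 responses while the
    // control plane is still settling.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		// The freshly restarted apiserver presents a cert this sketch
    		// does not verify; hence the insecure transport here.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz returned 200: ok
    			}
    			// 403/500 while post-start hooks finish is expected; retry.
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.61.244:8444/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }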
	I0729 18:27:13.783247   77859 cni.go:84] Creating CNI manager for ""
	I0729 18:27:13.783253   77859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:27:13.785148   77859 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:27:13.786485   77859 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:27:13.814986   77859 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 18:27:13.860557   77859 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:27:13.872823   77859 system_pods.go:59] 8 kube-system pods found
	I0729 18:27:13.872864   77859 system_pods.go:61] "coredns-7db6d8ff4d-mk6mx" [e005b1f9-cc7a-45aa-915e-85a461ebc814] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 18:27:13.872871   77859 system_pods.go:61] "etcd-default-k8s-diff-port-502055" [72b552cc-67b0-46bf-b3dd-b6732ebe8493] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 18:27:13.872879   77859 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-502055" [0dc22dbc-667e-4d6f-9938-b13bf3503f79] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 18:27:13.872885   77859 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-502055" [4df00b98-12cf-4359-9d98-8cce6ee9708a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 18:27:13.872891   77859 system_pods.go:61] "kube-proxy-cgdm8" [57a99bb3-9e63-47dd-a958-5be7f3c0a9c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 18:27:13.872898   77859 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-502055" [247b7cd1-6267-469d-af05-b33b284ae846] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 18:27:13.872903   77859 system_pods.go:61] "metrics-server-569cc877fc-bm8tm" [6891d9ee-82db-4307-adf1-ff60d35506bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:27:13.872912   77859 system_pods.go:61] "storage-provisioner" [c2264d30-60dc-41f9-9b84-3b073031cf1b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 18:27:13.872920   77859 system_pods.go:74] duration metric: took 12.342162ms to wait for pod list to return data ...
	I0729 18:27:13.872929   77859 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:27:13.879353   77859 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:27:13.879384   77859 node_conditions.go:123] node cpu capacity is 2
	I0729 18:27:13.879396   77859 node_conditions.go:105] duration metric: took 6.459994ms to run NodePressure ...
	I0729 18:27:13.879416   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:14.172203   77859 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 18:27:14.178467   77859 kubeadm.go:739] kubelet initialised
	I0729 18:27:14.178490   77859 kubeadm.go:740] duration metric: took 6.259862ms waiting for restarted kubelet to initialise ...
	I0729 18:27:14.178499   77859 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:27:14.184872   77859 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.190847   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.190871   77859 pod_ready.go:81] duration metric: took 5.974917ms for pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.190879   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.190886   77859 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.195570   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.195593   77859 pod_ready.go:81] duration metric: took 4.699847ms for pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.195603   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.195610   77859 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.199460   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.199480   77859 pod_ready.go:81] duration metric: took 3.863218ms for pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.199489   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.199494   77859 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.264725   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.264759   77859 pod_ready.go:81] duration metric: took 65.256372ms for pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.264774   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.264781   77859 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cgdm8" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:14.664064   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "kube-proxy-cgdm8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.664089   77859 pod_ready.go:81] duration metric: took 399.300184ms for pod "kube-proxy-cgdm8" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:14.664100   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "kube-proxy-cgdm8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:14.664109   77859 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:10.044797   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:12.543553   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:15.064029   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.064059   77859 pod_ready.go:81] duration metric: took 399.939139ms for pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:15.064074   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.064082   77859 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:15.464538   77859 pod_ready.go:97] node "default-k8s-diff-port-502055" hosting pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.464564   77859 pod_ready.go:81] duration metric: took 400.472397ms for pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace to be "Ready" ...
	E0729 18:27:15.464584   77859 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-502055" hosting pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.464592   77859 pod_ready.go:38] duration metric: took 1.286083847s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:27:15.464609   77859 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 18:27:15.478197   77859 ops.go:34] apiserver oom_adj: -16
	I0729 18:27:15.478220   77859 kubeadm.go:597] duration metric: took 9.311779975s to restartPrimaryControlPlane
	I0729 18:27:15.478229   77859 kubeadm.go:394] duration metric: took 9.368934157s to StartCluster
	I0729 18:27:15.478247   77859 settings.go:142] acquiring lock: {Name:mkd2c4591636cc1d19b23a0dab1807db2e7ea395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:15.478311   77859 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:27:15.479920   77859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:15.480159   77859 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.244 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:27:15.480244   77859 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 18:27:15.480322   77859 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-502055"
	I0729 18:27:15.480355   77859 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-502055"
	I0729 18:27:15.480356   77859 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-502055"
	W0729 18:27:15.480368   77859 addons.go:243] addon storage-provisioner should already be in state true
	I0729 18:27:15.480371   77859 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-502055"
	I0729 18:27:15.480396   77859 host.go:66] Checking if "default-k8s-diff-port-502055" exists ...
	I0729 18:27:15.480397   77859 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-502055"
	I0729 18:27:15.480402   77859 config.go:182] Loaded profile config "default-k8s-diff-port-502055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:27:15.480415   77859 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-502055"
	W0729 18:27:15.480426   77859 addons.go:243] addon metrics-server should already be in state true
	I0729 18:27:15.480460   77859 host.go:66] Checking if "default-k8s-diff-port-502055" exists ...
	I0729 18:27:15.480709   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.480723   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.480738   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.480738   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.480914   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.480943   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.482004   77859 out.go:177] * Verifying Kubernetes components...
	I0729 18:27:15.483504   77859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:15.495748   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35469
	I0729 18:27:15.495965   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43251
	I0729 18:27:15.495977   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41147
	I0729 18:27:15.496147   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.496324   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.496433   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.496604   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.496622   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.496760   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.496778   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.496914   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.496930   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.496982   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.497086   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.497219   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.497644   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.497672   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.498076   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:27:15.498408   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.498449   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.501769   77859 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-502055"
	W0729 18:27:15.501790   77859 addons.go:243] addon default-storageclass should already be in state true
	I0729 18:27:15.501814   77859 host.go:66] Checking if "default-k8s-diff-port-502055" exists ...
	I0729 18:27:15.502132   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.502163   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.516862   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42139
	I0729 18:27:15.517070   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33417
	I0729 18:27:15.517336   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.517525   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.517845   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.517877   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.518255   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.518356   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.518418   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.518657   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:27:15.518793   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.519009   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:27:15.520045   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44865
	I0729 18:27:15.520489   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.520613   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:27:15.520785   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:27:15.520962   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.520979   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.521295   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.521697   77859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:15.521712   77859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:15.522950   77859 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:15.522950   77859 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 18:27:15.524246   77859 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 18:27:15.524268   77859 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 18:27:15.524291   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:27:15.524355   77859 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:27:15.524370   77859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 18:27:15.524388   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:27:15.527946   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.528008   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.528609   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:27:15.528645   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.528678   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:27:15.528691   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.528723   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:27:15.528939   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:27:15.528953   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:27:15.529101   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:27:15.529150   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:27:15.529218   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:27:15.529524   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:27:15.529716   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:27:15.539969   77859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41273
	I0729 18:27:15.540410   77859 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:15.540887   77859 main.go:141] libmachine: Using API Version  1
	I0729 18:27:15.540913   77859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:15.541351   77859 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:15.541675   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetState
	I0729 18:27:15.543494   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .DriverName
	I0729 18:27:15.543728   77859 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 18:27:15.543744   77859 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 18:27:15.543762   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHHostname
	I0729 18:27:15.546809   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.547225   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:63:e1", ip: ""} in network mk-default-k8s-diff-port-502055: {Iface:virbr2 ExpiryTime:2024-07-29 19:26:51 +0000 UTC Type:0 Mac:52:54:00:ae:63:e1 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:default-k8s-diff-port-502055 Clientid:01:52:54:00:ae:63:e1}
	I0729 18:27:15.547250   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | domain default-k8s-diff-port-502055 has defined IP address 192.168.61.244 and MAC address 52:54:00:ae:63:e1 in network mk-default-k8s-diff-port-502055
	I0729 18:27:15.547405   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHPort
	I0729 18:27:15.547595   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHKeyPath
	I0729 18:27:15.547736   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .GetSSHUsername
	I0729 18:27:15.547859   77859 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/default-k8s-diff-port-502055/id_rsa Username:docker}
	I0729 18:27:15.662741   77859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:27:15.681179   77859 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-502055" to be "Ready" ...
	I0729 18:27:15.754691   77859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:27:15.767498   77859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 18:27:15.767515   77859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 18:27:15.781857   77859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 18:27:15.801619   77859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 18:27:15.801645   77859 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 18:27:15.823663   77859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:27:15.823690   77859 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 18:27:15.847827   77859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:27:16.818178   77859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.063432468s)
	I0729 18:27:16.818180   77859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.036288517s)
	I0729 18:27:16.818268   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.818234   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.818290   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.818307   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.818677   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Closing plugin on server side
	I0729 18:27:16.818680   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Closing plugin on server side
	I0729 18:27:16.818694   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.818710   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.818723   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.818724   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.818735   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.818740   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.818755   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.818766   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.818989   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.819000   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.819004   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.819017   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.819014   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) DBG | Closing plugin on server side
	I0729 18:27:16.824028   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.824047   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.824268   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.824292   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.877321   77859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.029455089s)
	I0729 18:27:16.877378   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.877393   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.877718   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.877767   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.877790   77859 main.go:141] libmachine: Making call to close driver server
	I0729 18:27:16.877801   77859 main.go:141] libmachine: (default-k8s-diff-port-502055) Calling .Close
	I0729 18:27:16.878030   77859 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:27:16.878047   77859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:27:16.878061   77859 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-502055"
	I0729 18:27:16.879704   77859 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 18:27:14.381238   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:14.381648   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | unable to find current IP address of domain old-k8s-version-386663 in network mk-old-k8s-version-386663
	I0729 18:27:14.381677   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | I0729 18:27:14.381609   79098 retry.go:31] will retry after 4.005160817s: waiting for machine to come up
	I0729 18:27:16.880972   77859 addons.go:510] duration metric: took 1.400728317s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 18:27:17.685480   77859 node_ready.go:53] node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:19.687853   77859 node_ready.go:53] node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:15.042487   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:17.043250   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:19.045374   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:19.859418   77394 start.go:364] duration metric: took 54.906462088s to acquireMachinesLock for "no-preload-888056"
	I0729 18:27:19.859470   77394 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:27:19.859478   77394 fix.go:54] fixHost starting: 
	I0729 18:27:19.859850   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:27:19.859896   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:27:19.876798   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46323
	I0729 18:27:19.877254   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:27:19.877674   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:27:19.877709   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:27:19.878087   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:27:19.878257   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:19.878399   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:27:19.879875   77394 fix.go:112] recreateIfNeeded on no-preload-888056: state=Stopped err=<nil>
	I0729 18:27:19.879909   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	W0729 18:27:19.880054   77394 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:27:19.882098   77394 out.go:177] * Restarting existing kvm2 VM for "no-preload-888056" ...
	I0729 18:27:18.388470   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.388971   78080 main.go:141] libmachine: (old-k8s-version-386663) Found IP for machine: 192.168.50.70
	I0729 18:27:18.388989   78080 main.go:141] libmachine: (old-k8s-version-386663) Reserving static IP address...
	I0729 18:27:18.388999   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has current primary IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.389431   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "old-k8s-version-386663", mac: "52:54:00:78:b6:ac", ip: "192.168.50.70"} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.389459   78080 main.go:141] libmachine: (old-k8s-version-386663) Reserved static IP address: 192.168.50.70
	I0729 18:27:18.389477   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | skip adding static IP to network mk-old-k8s-version-386663 - found existing host DHCP lease matching {name: "old-k8s-version-386663", mac: "52:54:00:78:b6:ac", ip: "192.168.50.70"}
	I0729 18:27:18.389493   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | Getting to WaitForSSH function...
	I0729 18:27:18.389515   78080 main.go:141] libmachine: (old-k8s-version-386663) Waiting for SSH to be available...
	I0729 18:27:18.391523   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.391916   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.391941   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.392062   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | Using SSH client type: external
	I0729 18:27:18.392088   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa (-rw-------)
	I0729 18:27:18.392119   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:27:18.392134   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | About to run SSH command:
	I0729 18:27:18.392150   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | exit 0
	I0729 18:27:18.514735   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | SSH cmd err, output: <nil>: 
	I0729 18:27:18.515114   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetConfigRaw
	I0729 18:27:18.515736   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:18.518194   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.518615   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.518651   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.518879   78080 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/config.json ...
	I0729 18:27:18.519090   78080 machine.go:94] provisionDockerMachine start ...
	I0729 18:27:18.519113   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:18.519322   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.521434   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.521824   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.521846   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.521996   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:18.522181   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.522349   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.522514   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:18.522724   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:18.522960   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:18.522975   78080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:27:18.622960   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:27:18.622989   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetMachineName
	I0729 18:27:18.623249   78080 buildroot.go:166] provisioning hostname "old-k8s-version-386663"
	I0729 18:27:18.623277   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetMachineName
	I0729 18:27:18.623461   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.626009   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.626376   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.626406   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.626649   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:18.626876   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.627141   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.627301   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:18.627474   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:18.627669   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:18.627683   78080 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-386663 && echo "old-k8s-version-386663" | sudo tee /etc/hostname
	I0729 18:27:18.748137   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-386663
	
	I0729 18:27:18.748165   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.751546   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.751882   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.751916   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.752086   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:18.752270   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.752409   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:18.752550   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:18.752747   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:18.753004   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:18.753031   78080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-386663' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-386663/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-386663' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:27:18.863358   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:27:18.863389   78080 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:27:18.863415   78080 buildroot.go:174] setting up certificates
	I0729 18:27:18.863425   78080 provision.go:84] configureAuth start
	I0729 18:27:18.863436   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetMachineName
	I0729 18:27:18.863754   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:18.866285   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.866641   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.866668   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.866797   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:18.868886   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.869241   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:18.869270   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:18.869404   78080 provision.go:143] copyHostCerts
	I0729 18:27:18.869459   78080 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:27:18.869468   78080 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:27:18.869522   78080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:27:18.869614   78080 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:27:18.869624   78080 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:27:18.869652   78080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:27:18.869740   78080 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:27:18.869750   78080 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:27:18.869772   78080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:27:18.869833   78080 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-386663 san=[127.0.0.1 192.168.50.70 localhost minikube old-k8s-version-386663]
	I0729 18:27:19.142743   78080 provision.go:177] copyRemoteCerts
	I0729 18:27:19.142808   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:27:19.142842   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.145484   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.145843   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.145872   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.146092   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.146334   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.146532   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.146692   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.230725   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:27:19.255862   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 18:27:19.290922   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 18:27:19.317519   78080 provision.go:87] duration metric: took 454.081583ms to configureAuth
	I0729 18:27:19.317549   78080 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:27:19.317766   78080 config.go:182] Loaded profile config "old-k8s-version-386663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 18:27:19.317854   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.320636   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.321074   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.321110   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.321346   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.321603   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.321782   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.321959   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.322158   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:19.322336   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:19.322351   78080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:27:19.626713   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:27:19.626737   78080 machine.go:97] duration metric: took 1.107631867s to provisionDockerMachine
	I0729 18:27:19.626749   78080 start.go:293] postStartSetup for "old-k8s-version-386663" (driver="kvm2")
	I0729 18:27:19.626763   78080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:27:19.626834   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.627168   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:27:19.627197   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.629389   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.629751   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.629782   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.629907   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.630102   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.630302   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.630460   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.709702   78080 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:27:19.713879   78080 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:27:19.713913   78080 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:27:19.713994   78080 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:27:19.714093   78080 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:27:19.714215   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:27:19.725226   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:19.751727   78080 start.go:296] duration metric: took 124.964072ms for postStartSetup
	I0729 18:27:19.751767   78080 fix.go:56] duration metric: took 19.951972224s for fixHost
	I0729 18:27:19.751796   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.754481   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.754843   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.754877   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.755107   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.755321   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.755482   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.755663   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.755829   78080 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:19.756012   78080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0729 18:27:19.756024   78080 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:27:19.859279   78080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277639.831700968
	
	I0729 18:27:19.859302   78080 fix.go:216] guest clock: 1722277639.831700968
	I0729 18:27:19.859309   78080 fix.go:229] Guest: 2024-07-29 18:27:19.831700968 +0000 UTC Remote: 2024-07-29 18:27:19.751770935 +0000 UTC m=+272.565043390 (delta=79.930033ms)
	I0729 18:27:19.859327   78080 fix.go:200] guest clock delta is within tolerance: 79.930033ms
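As context for the two fix.go lines above: the guest clock is read over SSH with `date +%s.%N` and compared against the host's wall clock, and the result is accepted only if the drift stays inside a tolerance. A minimal Go sketch of that comparison, using the values recorded in this log (the function name withinTolerance and the one-second tolerance are illustrative assumptions, not minikube's actual fix.go):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether guest and host clocks differ by less than maxDrift.
// The log above records a delta of roughly 79.93ms, well under a one-second tolerance.
func withinTolerance(guest, host time.Time, maxDrift time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta < maxDrift
}

func main() {
	guest := time.Unix(1722277639, 831700968)                       // parsed from "date +%s.%N" on the VM
	host := time.Date(2024, 7, 29, 18, 27, 19, 751770935, time.UTC) // local wall clock at check time
	fmt.Println("delta:", guest.Sub(host), "within tolerance:", withinTolerance(guest, host, time.Second))
}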
	I0729 18:27:19.859332   78080 start.go:83] releasing machines lock for "old-k8s-version-386663", held for 20.059569122s
	I0729 18:27:19.859353   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.859661   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:19.862741   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.863225   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.863261   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.863449   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.864092   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.864309   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .DriverName
	I0729 18:27:19.864392   78080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:27:19.864432   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.864547   78080 ssh_runner.go:195] Run: cat /version.json
	I0729 18:27:19.864572   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHHostname
	I0729 18:27:19.867636   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.867798   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.868019   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.868044   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.868178   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.868330   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:19.868356   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:19.868360   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.868500   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.868587   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHPort
	I0729 18:27:19.868667   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.868754   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHKeyPath
	I0729 18:27:19.868910   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetSSHUsername
	I0729 18:27:19.869046   78080 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/old-k8s-version-386663/id_rsa Username:docker}
	I0729 18:27:19.947441   78080 ssh_runner.go:195] Run: systemctl --version
	I0729 18:27:19.967868   78080 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:27:20.114336   78080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:27:20.121716   78080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:27:20.121793   78080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:27:20.143272   78080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:27:20.143298   78080 start.go:495] detecting cgroup driver to use...
	I0729 18:27:20.143385   78080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:27:20.162433   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:27:20.178310   78080 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:27:20.178397   78080 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:27:20.194091   78080 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:27:20.209796   78080 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:27:20.341466   78080 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:27:20.514215   78080 docker.go:233] disabling docker service ...
	I0729 18:27:20.514338   78080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:27:20.531018   78080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:27:20.551839   78080 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:27:20.680430   78080 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:27:20.834782   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:27:20.852454   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:27:20.874962   78080 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 18:27:20.875017   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:20.886550   78080 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:27:20.886619   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:20.899344   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:20.914254   78080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:20.927308   78080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:27:20.939807   78080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:27:20.951648   78080 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:27:20.951738   78080 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:27:20.967918   78080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:27:20.979872   78080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:21.125398   78080 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:27:21.290736   78080 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:27:21.290816   78080 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:27:21.296922   78080 start.go:563] Will wait 60s for crictl version
	I0729 18:27:21.296987   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:21.302200   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:27:21.350783   78080 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:27:21.350919   78080 ssh_runner.go:195] Run: crio --version
	I0729 18:27:21.391539   78080 ssh_runner.go:195] Run: crio --version
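The two "Will wait 60s" steps above (for the CRI socket to appear and for crictl to answer) amount to polling with a deadline. A rough Go sketch of that pattern follows; waitForSocket and the 500ms poll interval are illustrative assumptions, not minikube's actual start.go:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a filesystem path until it exists or the timeout expires,
// mirroring the "Will wait 60s for socket path /var/run/crio/crio.sock" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CRI socket is present")
}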
	I0729 18:27:21.441225   78080 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 18:27:21.442583   78080 main.go:141] libmachine: (old-k8s-version-386663) Calling .GetIP
	I0729 18:27:21.446238   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:21.446728   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:b6:ac", ip: ""} in network mk-old-k8s-version-386663: {Iface:virbr3 ExpiryTime:2024-07-29 19:27:11 +0000 UTC Type:0 Mac:52:54:00:78:b6:ac Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:old-k8s-version-386663 Clientid:01:52:54:00:78:b6:ac}
	I0729 18:27:21.446756   78080 main.go:141] libmachine: (old-k8s-version-386663) DBG | domain old-k8s-version-386663 has defined IP address 192.168.50.70 and MAC address 52:54:00:78:b6:ac in network mk-old-k8s-version-386663
	I0729 18:27:21.446988   78080 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 18:27:21.452537   78080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:21.470394   78080 kubeadm.go:883] updating cluster {Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:27:21.470555   78080 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:27:21.470610   78080 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:21.531670   78080 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 18:27:21.531742   78080 ssh_runner.go:195] Run: which lz4
	I0729 18:27:21.536436   78080 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 18:27:21.542100   78080 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:27:21.542139   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 18:27:19.883514   77394 main.go:141] libmachine: (no-preload-888056) Calling .Start
	I0729 18:27:19.883693   77394 main.go:141] libmachine: (no-preload-888056) Ensuring networks are active...
	I0729 18:27:19.884447   77394 main.go:141] libmachine: (no-preload-888056) Ensuring network default is active
	I0729 18:27:19.884847   77394 main.go:141] libmachine: (no-preload-888056) Ensuring network mk-no-preload-888056 is active
	I0729 18:27:19.885240   77394 main.go:141] libmachine: (no-preload-888056) Getting domain xml...
	I0729 18:27:19.886133   77394 main.go:141] libmachine: (no-preload-888056) Creating domain...
	I0729 18:27:21.226599   77394 main.go:141] libmachine: (no-preload-888056) Waiting to get IP...
	I0729 18:27:21.227673   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:21.228215   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:21.228278   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:21.228178   79288 retry.go:31] will retry after 290.676407ms: waiting for machine to come up
	I0729 18:27:21.520818   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:21.521458   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:21.521480   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:21.521360   79288 retry.go:31] will retry after 266.145355ms: waiting for machine to come up
	I0729 18:27:21.789603   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:21.790170   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:21.790200   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:21.790137   79288 retry.go:31] will retry after 464.137123ms: waiting for machine to come up
	I0729 18:27:22.255586   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:22.256159   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:22.256184   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:22.256098   79288 retry.go:31] will retry after 562.330595ms: waiting for machine to come up
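The retry.go entries above wait for the freshly started domain to obtain a DHCP lease, sleeping an increasing interval between attempts. A small Go sketch of that retry-with-growing-backoff pattern is below; the names, the doubling factor, and the bounds are illustrative assumptions (the real log shows jittered intervals), not minikube's retry package:

package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoIP = errors.New("machine has no IP address yet")

// retryWithBackoff keeps calling fn until it succeeds or the overall deadline passes,
// increasing the wait between attempts as in the "will retry after ..." entries above.
func retryWithBackoff(fn func() error, initial, max time.Duration, deadline time.Time) error {
	wait := initial
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().Add(wait).After(deadline) {
			return fmt.Errorf("gave up waiting: %w", err)
		}
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
		if wait *= 2; wait > max {
			wait = max
		}
	}
}

func main() {
	attempts := 0
	lookupIP := func() error { // stand-in for querying the libvirt DHCP leases
		if attempts++; attempts < 4 {
			return errNoIP
		}
		return nil
	}
	_ = retryWithBackoff(lookupIP, 300*time.Millisecond, 5*time.Second, time.Now().Add(time.Minute))
}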
	I0729 18:27:21.691280   77859 node_ready.go:53] node "default-k8s-diff-port-502055" has status "Ready":"False"
	I0729 18:27:23.188725   77859 node_ready.go:49] node "default-k8s-diff-port-502055" has status "Ready":"True"
	I0729 18:27:23.188758   77859 node_ready.go:38] duration metric: took 7.507549954s for node "default-k8s-diff-port-502055" to be "Ready" ...
	I0729 18:27:23.188772   77859 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:27:23.197714   77859 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:23.204037   77859 pod_ready.go:92] pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:23.204065   77859 pod_ready.go:81] duration metric: took 6.32123ms for pod "coredns-7db6d8ff4d-mk6mx" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:23.204086   77859 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:23.211765   77859 pod_ready.go:92] pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:23.211791   77859 pod_ready.go:81] duration metric: took 7.69614ms for pod "etcd-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:23.211803   77859 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:21.544757   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:24.043649   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:23.329902   78080 crio.go:462] duration metric: took 1.793505279s to copy over tarball
	I0729 18:27:23.329979   78080 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:27:26.453768   78080 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.123735537s)
	I0729 18:27:26.453800   78080 crio.go:469] duration metric: took 3.123869338s to extract the tarball
	I0729 18:27:26.453809   78080 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 18:27:26.501748   78080 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:26.538093   78080 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 18:27:26.538124   78080 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 18:27:26.538226   78080 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.538297   78080 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 18:27:26.538387   78080 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.538232   78080 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:26.538441   78080 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.538303   78080 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.538277   78080 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.538783   78080 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.540806   78080 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.540823   78080 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.540847   78080 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.540858   78080 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 18:27:26.540806   78080 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:26.540894   78080 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.540937   78080 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.540987   78080 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.700993   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.704402   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.712647   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.714034   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.715935   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.753888   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.758588   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 18:27:26.837981   78080 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:26.844473   78080 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 18:27:26.844532   78080 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 18:27:26.844578   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.877082   78080 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 18:27:26.877134   78080 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:26.877183   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.889792   78080 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 18:27:26.889887   78080 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:26.889842   78080 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 18:27:26.889944   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.889983   78080 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:26.890034   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.916338   78080 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 18:27:26.916388   78080 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:26.916440   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.916437   78080 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 18:27:26.916540   78080 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:26.916581   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:26.942747   78080 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 18:27:26.942794   78080 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 18:27:26.942839   78080 ssh_runner.go:195] Run: which crictl
	I0729 18:27:27.056976   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:27:27.056976   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 18:27:27.057045   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 18:27:27.057071   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:27:27.057101   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:27:27.057152   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:27:27.057178   78080 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 18:27:27.219396   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 18:27:22.820490   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:22.820969   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:22.820993   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:22.820906   79288 retry.go:31] will retry after 728.452145ms: waiting for machine to come up
	I0729 18:27:23.550655   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:23.551337   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:23.551361   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:23.551287   79288 retry.go:31] will retry after 782.583051ms: waiting for machine to come up
	I0729 18:27:24.335785   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:24.336257   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:24.336310   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:24.336235   79288 retry.go:31] will retry after 1.040109521s: waiting for machine to come up
	I0729 18:27:25.377676   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:25.378187   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:25.378231   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:25.378153   79288 retry.go:31] will retry after 1.276093038s: waiting for machine to come up
	I0729 18:27:26.655479   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:26.655922   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:26.655950   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:26.655872   79288 retry.go:31] will retry after 1.267687539s: waiting for machine to come up
	I0729 18:27:25.219175   77859 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:27.225735   77859 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:27.718741   77859 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:27.718772   77859 pod_ready.go:81] duration metric: took 4.506959705s for pod "kube-apiserver-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.718786   77859 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.723687   77859 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:27.723709   77859 pod_ready.go:81] duration metric: took 4.915901ms for pod "kube-controller-manager-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.723720   77859 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cgdm8" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.728504   77859 pod_ready.go:92] pod "kube-proxy-cgdm8" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:27.728526   77859 pod_ready.go:81] duration metric: took 4.797185ms for pod "kube-proxy-cgdm8" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.728538   77859 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.733036   77859 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace has status "Ready":"True"
	I0729 18:27:27.733061   77859 pod_ready.go:81] duration metric: took 4.514471ms for pod "kube-scheduler-default-k8s-diff-port-502055" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:27.733073   77859 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace to be "Ready" ...
	I0729 18:27:29.739966   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:26.044607   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:28.543664   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:27.219541   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 18:27:27.223329   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 18:27:27.223406   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 18:27:27.223450   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 18:27:27.223492   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 18:27:27.223536   78080 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 18:27:27.223567   78080 cache_images.go:92] duration metric: took 685.427642ms to LoadCachedImages
	W0729 18:27:27.223653   78080 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0729 18:27:27.223672   78080 kubeadm.go:934] updating node { 192.168.50.70 8443 v1.20.0 crio true true} ...
	I0729 18:27:27.223785   78080 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-386663 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:27:27.223866   78080 ssh_runner.go:195] Run: crio config
	I0729 18:27:27.273186   78080 cni.go:84] Creating CNI manager for ""
	I0729 18:27:27.273207   78080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:27:27.273217   78080 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:27:27.273241   78080 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.70 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-386663 NodeName:old-k8s-version-386663 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 18:27:27.273424   78080 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-386663"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:27:27.273498   78080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 18:27:27.285247   78080 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:27:27.285327   78080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:27:27.295747   78080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 18:27:27.314192   78080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:27:27.331654   78080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 18:27:27.351717   78080 ssh_runner.go:195] Run: grep 192.168.50.70	control-plane.minikube.internal$ /etc/hosts
	I0729 18:27:27.356205   78080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:27.370446   78080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:27.509250   78080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:27:27.528776   78080 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663 for IP: 192.168.50.70
	I0729 18:27:27.528804   78080 certs.go:194] generating shared ca certs ...
	I0729 18:27:27.528823   78080 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:27.528991   78080 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:27:27.529045   78080 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:27:27.529061   78080 certs.go:256] generating profile certs ...
	I0729 18:27:27.529194   78080 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/client.key
	I0729 18:27:27.529308   78080 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.key.71ea3f9f
	I0729 18:27:27.529364   78080 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.key
	I0729 18:27:27.529529   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:27:27.529569   78080 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:27:27.529584   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:27:27.529614   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:27:27.529645   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:27:27.529689   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:27:27.529751   78080 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:27.530573   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:27:27.582122   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:27:27.626846   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:27:27.663609   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:27:27.700294   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 18:27:27.746614   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:27:27.785212   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:27:27.834479   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/old-k8s-version-386663/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:27:27.866939   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:27:27.892613   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:27:27.919059   78080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:27:27.947557   78080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:27:27.968625   78080 ssh_runner.go:195] Run: openssl version
	I0729 18:27:27.976500   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:27:27.991016   78080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:27:27.996228   78080 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:27:27.996285   78080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:27:28.002529   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:27:28.013844   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:27:28.025388   78080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:28.029982   78080 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:28.030042   78080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:28.036362   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:27:28.050134   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:27:28.062742   78080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:27:28.067240   78080 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:27:28.067293   78080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:27:28.072973   78080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 18:27:28.084143   78080 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:27:28.089526   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:27:28.096556   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:27:28.103044   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:27:28.109337   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:27:28.115455   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:27:28.121449   78080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 18:27:28.127395   78080 kubeadm.go:392] StartCluster: {Name:old-k8s-version-386663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-386663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:27:28.127504   78080 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:27:28.127581   78080 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:28.176772   78080 cri.go:89] found id: ""
	I0729 18:27:28.176837   78080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:27:28.187955   78080 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:27:28.187979   78080 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:27:28.188034   78080 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:27:28.197926   78080 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:27:28.199364   78080 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-386663" does not appear in /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:27:28.200382   78080 kubeconfig.go:62] /home/jenkins/minikube-integration/19345-11206/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-386663" cluster setting kubeconfig missing "old-k8s-version-386663" context setting]
	I0729 18:27:28.201737   78080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:28.287712   78080 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:27:28.300675   78080 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.70
	I0729 18:27:28.300716   78080 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:27:28.300728   78080 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:27:28.300795   78080 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:28.343880   78080 cri.go:89] found id: ""
	I0729 18:27:28.343962   78080 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:27:28.362391   78080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:27:28.372805   78080 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:27:28.372830   78080 kubeadm.go:157] found existing configuration files:
	
	I0729 18:27:28.372882   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:27:28.383540   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:27:28.383629   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:27:28.396564   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:27:28.409151   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:27:28.409208   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:27:28.422243   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:27:28.434736   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:27:28.434839   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:27:28.447681   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:27:28.460008   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:27:28.460073   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:27:28.472647   78080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:27:28.484179   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:28.634526   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:29.206575   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:29.449626   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:29.550859   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:29.681945   78080 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:27:29.682015   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:30.182098   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:30.682977   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:31.182152   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:31.682468   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:32.183031   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:27.924957   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:27.925430   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:27.925461   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:27.925378   79288 retry.go:31] will retry after 1.455979038s: waiting for machine to come up
	I0729 18:27:29.383257   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:29.383769   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:29.383793   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:29.383722   79288 retry.go:31] will retry after 1.862834258s: waiting for machine to come up
	I0729 18:27:31.248806   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:31.249394   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:31.249414   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:31.249344   79288 retry.go:31] will retry after 3.203097967s: waiting for machine to come up
	I0729 18:27:32.242350   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:34.738663   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:31.043735   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:33.543152   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:32.682567   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:33.182100   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:33.682494   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:34.183075   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:34.683115   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:35.183094   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:35.683092   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:36.182173   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:36.682843   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:37.182324   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:34.453552   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:34.453906   77394 main.go:141] libmachine: (no-preload-888056) DBG | unable to find current IP address of domain no-preload-888056 in network mk-no-preload-888056
	I0729 18:27:34.453930   77394 main.go:141] libmachine: (no-preload-888056) DBG | I0729 18:27:34.453852   79288 retry.go:31] will retry after 3.166208105s: waiting for machine to come up
	I0729 18:27:36.739239   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:38.740812   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:35.543428   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:38.042603   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:37.622330   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.622738   77394 main.go:141] libmachine: (no-preload-888056) Found IP for machine: 192.168.72.80
	I0729 18:27:37.622767   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has current primary IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.622779   77394 main.go:141] libmachine: (no-preload-888056) Reserving static IP address...
	I0729 18:27:37.623108   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "no-preload-888056", mac: "52:54:00:b2:b0:1a", ip: "192.168.72.80"} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.623144   77394 main.go:141] libmachine: (no-preload-888056) DBG | skip adding static IP to network mk-no-preload-888056 - found existing host DHCP lease matching {name: "no-preload-888056", mac: "52:54:00:b2:b0:1a", ip: "192.168.72.80"}
	I0729 18:27:37.623160   77394 main.go:141] libmachine: (no-preload-888056) Reserved static IP address: 192.168.72.80
	I0729 18:27:37.623174   77394 main.go:141] libmachine: (no-preload-888056) Waiting for SSH to be available...
	I0729 18:27:37.623183   77394 main.go:141] libmachine: (no-preload-888056) DBG | Getting to WaitForSSH function...
	I0729 18:27:37.625391   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.625732   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.625759   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.625927   77394 main.go:141] libmachine: (no-preload-888056) DBG | Using SSH client type: external
	I0729 18:27:37.625948   77394 main.go:141] libmachine: (no-preload-888056) DBG | Using SSH private key: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa (-rw-------)
	I0729 18:27:37.625994   77394 main.go:141] libmachine: (no-preload-888056) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:27:37.626008   77394 main.go:141] libmachine: (no-preload-888056) DBG | About to run SSH command:
	I0729 18:27:37.626020   77394 main.go:141] libmachine: (no-preload-888056) DBG | exit 0
	I0729 18:27:37.750587   77394 main.go:141] libmachine: (no-preload-888056) DBG | SSH cmd err, output: <nil>: 
	I0729 18:27:37.750986   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetConfigRaw
	I0729 18:27:37.751717   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetIP
	I0729 18:27:37.754387   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.754753   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.754781   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.754995   77394 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/config.json ...
	I0729 18:27:37.755184   77394 machine.go:94] provisionDockerMachine start ...
	I0729 18:27:37.755207   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:37.755397   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:37.757649   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.757965   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.757988   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.758128   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:37.758297   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.758463   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.758599   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:37.758754   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:37.758918   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:37.758927   77394 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:27:37.862940   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:27:37.862976   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:27:37.863205   77394 buildroot.go:166] provisioning hostname "no-preload-888056"
	I0729 18:27:37.863234   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:27:37.863425   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:37.866190   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.866538   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.866565   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.866705   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:37.866878   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.867046   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.867166   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:37.867307   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:37.867478   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:37.867490   77394 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-888056 && echo "no-preload-888056" | sudo tee /etc/hostname
	I0729 18:27:37.985031   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-888056
	
	I0729 18:27:37.985070   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:37.987577   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.987917   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:37.987945   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:37.988126   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:37.988311   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.988469   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:37.988601   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:37.988786   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:37.988994   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:37.989012   77394 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-888056' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-888056/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-888056' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:27:38.103831   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:27:38.103853   77394 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19345-11206/.minikube CaCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19345-11206/.minikube}
	I0729 18:27:38.103870   77394 buildroot.go:174] setting up certificates
	I0729 18:27:38.103878   77394 provision.go:84] configureAuth start
	I0729 18:27:38.103886   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetMachineName
	I0729 18:27:38.104166   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetIP
	I0729 18:27:38.107080   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.107493   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.107521   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.107690   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.110087   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.110495   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.110520   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.110738   77394 provision.go:143] copyHostCerts
	I0729 18:27:38.110793   77394 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem, removing ...
	I0729 18:27:38.110802   77394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem
	I0729 18:27:38.110853   77394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/ca.pem (1078 bytes)
	I0729 18:27:38.110968   77394 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem, removing ...
	I0729 18:27:38.110978   77394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem
	I0729 18:27:38.110998   77394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/cert.pem (1123 bytes)
	I0729 18:27:38.111056   77394 exec_runner.go:144] found /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem, removing ...
	I0729 18:27:38.111063   77394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem
	I0729 18:27:38.111080   77394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19345-11206/.minikube/key.pem (1675 bytes)
	I0729 18:27:38.111149   77394 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem org=jenkins.no-preload-888056 san=[127.0.0.1 192.168.72.80 localhost minikube no-preload-888056]
	I0729 18:27:38.327305   77394 provision.go:177] copyRemoteCerts
	I0729 18:27:38.327378   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:27:38.327407   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.330008   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.330304   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.330327   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.330516   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:38.330739   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.330908   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:38.331071   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:27:38.414678   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 18:27:38.443418   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:27:38.469248   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 18:27:38.494014   77394 provision.go:87] duration metric: took 390.106553ms to configureAuth
	I0729 18:27:38.494049   77394 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:27:38.494245   77394 config.go:182] Loaded profile config "no-preload-888056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:27:38.494357   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.497162   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.497586   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.497620   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.497946   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:38.498137   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.498328   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.498566   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:38.498766   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:38.498940   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:38.498955   77394 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:27:38.762438   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:27:38.762462   77394 machine.go:97] duration metric: took 1.007266999s to provisionDockerMachine
	I0729 18:27:38.762473   77394 start.go:293] postStartSetup for "no-preload-888056" (driver="kvm2")
	I0729 18:27:38.762484   77394 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:27:38.762511   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:38.762797   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:27:38.762832   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.765677   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.766031   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.766054   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.766222   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:38.766432   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.766621   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:38.766774   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:27:38.854492   77394 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:27:38.858934   77394 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:27:38.858962   77394 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/addons for local assets ...
	I0729 18:27:38.859041   77394 filesync.go:126] Scanning /home/jenkins/minikube-integration/19345-11206/.minikube/files for local assets ...
	I0729 18:27:38.859136   77394 filesync.go:149] local asset: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem -> 183932.pem in /etc/ssl/certs
	I0729 18:27:38.859251   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:27:38.869459   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:38.894422   77394 start.go:296] duration metric: took 131.935433ms for postStartSetup
	I0729 18:27:38.894466   77394 fix.go:56] duration metric: took 19.034987866s for fixHost
	I0729 18:27:38.894492   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:38.897266   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.897654   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:38.897684   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:38.897890   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:38.898102   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.898250   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:38.898356   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:38.898547   77394 main.go:141] libmachine: Using SSH client type: native
	I0729 18:27:38.898721   77394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0729 18:27:38.898732   77394 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:27:39.003526   77394 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277658.970659996
	
	I0729 18:27:39.003571   77394 fix.go:216] guest clock: 1722277658.970659996
	I0729 18:27:39.003581   77394 fix.go:229] Guest: 2024-07-29 18:27:38.970659996 +0000 UTC Remote: 2024-07-29 18:27:38.8944731 +0000 UTC m=+356.533366653 (delta=76.186896ms)
	I0729 18:27:39.003600   77394 fix.go:200] guest clock delta is within tolerance: 76.186896ms
	I0729 18:27:39.003605   77394 start.go:83] releasing machines lock for "no-preload-888056", held for 19.144159359s
	I0729 18:27:39.003622   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:39.003881   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetIP
	I0729 18:27:39.006550   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.006850   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:39.006886   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.007005   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:39.007597   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:39.007779   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:27:39.007879   77394 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:27:39.007939   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:39.008001   77394 ssh_runner.go:195] Run: cat /version.json
	I0729 18:27:39.008026   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:27:39.010634   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.010941   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:39.010965   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.010984   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.011257   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:39.011442   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:39.011474   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:39.011487   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:39.011632   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:27:39.011678   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:39.011782   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:27:39.011951   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:27:39.011985   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:27:39.012094   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:27:39.114446   77394 ssh_runner.go:195] Run: systemctl --version
	I0729 18:27:39.120848   77394 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:27:39.266976   77394 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:27:39.273603   77394 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:27:39.273670   77394 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:27:39.295511   77394 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:27:39.295533   77394 start.go:495] detecting cgroup driver to use...
	I0729 18:27:39.295593   77394 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:27:39.313692   77394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:27:39.328435   77394 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:27:39.328502   77394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:27:39.342580   77394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:27:39.356694   77394 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:27:39.474555   77394 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:27:39.632766   77394 docker.go:233] disabling docker service ...
	I0729 18:27:39.632827   77394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:27:39.648961   77394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:27:39.663277   77394 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:27:39.813329   77394 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:27:39.944017   77394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:27:39.957624   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:27:39.976348   77394 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 18:27:39.976401   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:39.986672   77394 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:27:39.986735   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:39.996867   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.007547   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.018141   77394 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:27:40.029258   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.040007   77394 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.057611   77394 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:27:40.068107   77394 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:27:40.077798   77394 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:27:40.077877   77394 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:27:40.091040   77394 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:27:40.100846   77394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:40.227049   77394 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:27:40.368213   77394 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:27:40.368295   77394 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:27:40.374168   77394 start.go:563] Will wait 60s for crictl version
	I0729 18:27:40.374239   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.378268   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:27:40.422500   77394 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:27:40.422579   77394 ssh_runner.go:195] Run: crio --version
	I0729 18:27:40.451170   77394 ssh_runner.go:195] Run: crio --version
	I0729 18:27:40.481789   77394 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 18:27:37.682180   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:38.182453   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:38.682639   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:39.182874   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:39.682496   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:40.182727   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:40.683073   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:41.182060   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:41.682421   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:42.182813   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:40.483209   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetIP
	I0729 18:27:40.486303   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:40.486738   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:27:40.486768   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:27:40.487032   77394 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 18:27:40.491318   77394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:40.505196   77394 kubeadm.go:883] updating cluster {Name:no-preload-888056 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-888056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.80 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:27:40.505303   77394 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 18:27:40.505333   77394 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:27:40.541356   77394 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 18:27:40.541380   77394 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 18:27:40.541445   77394 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:40.541452   77394 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:40.541465   77394 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:40.541495   77394 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.541503   77394 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.541527   77394 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.541583   77394 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:40.542060   77394 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 18:27:40.543507   77394 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:40.543519   77394 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.543505   77394 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 18:27:40.543535   77394 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.543504   77394 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.543761   77394 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:40.543799   77394 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:40.543999   77394 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:40.693026   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.709057   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.715664   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.720337   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:40.746126   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 18:27:40.748805   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:40.759200   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:40.768613   77394 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 18:27:40.768659   77394 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.768705   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.812940   77394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:40.852143   77394 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 18:27:40.852173   77394 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 18:27:40.852191   77394 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.852206   77394 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.852237   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.852249   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.890477   77394 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 18:27:40.890521   77394 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:40.890566   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.991390   77394 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 18:27:40.991435   77394 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:40.991462   77394 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 18:27:40.991486   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.991501   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:27:40.991508   77394 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:40.991548   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.991556   77394 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 18:27:40.991579   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 18:27:40.991595   77394 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:40.991609   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:27:40.991654   77394 ssh_runner.go:195] Run: which crictl
	I0729 18:27:40.991694   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:27:41.087626   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 18:27:41.087736   77394 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 18:27:41.087742   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 18:27:41.087782   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:27:41.087819   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 18:27:41.087830   77394 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 18:27:41.087883   77394 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 18:27:41.091774   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 18:27:41.091828   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:27:41.091858   77394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:27:41.091873   77394 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0729 18:27:41.104679   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 18:27:41.104702   77394 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 18:27:41.104733   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 18:27:41.104750   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 18:27:41.155992   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 18:27:41.156114   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 18:27:41.156227   77394 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 18:27:41.169410   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 18:27:41.169535   77394 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0729 18:27:41.176103   77394 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 18:27:41.176116   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 18:27:41.176214   77394 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 18:27:41.241044   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:43.739887   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:40.543004   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:43.044338   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:42.682911   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:43.182279   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:43.682506   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:44.182109   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:44.682593   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:45.183002   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:45.682275   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:46.182491   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:46.683027   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:47.182311   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:44.874768   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.769989933s)
	I0729 18:27:44.874798   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 18:27:44.874827   77394 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 18:27:44.874861   77394 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (3.71860957s)
	I0729 18:27:44.874894   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 18:27:44.874906   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 18:27:44.874930   77394 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.705380577s)
	I0729 18:27:44.874947   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 18:27:44.874972   77394 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (3.698734733s)
	I0729 18:27:44.875001   77394 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 18:27:46.333065   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.458135446s)
	I0729 18:27:46.333109   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 18:27:46.333137   77394 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 18:27:46.333175   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 18:27:45.739935   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:47.740654   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:45.542272   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:47.543683   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:47.682979   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:48.183024   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:48.682708   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:49.182427   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:49.682335   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:50.182146   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:50.682716   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:51.182231   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:51.683106   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:52.182739   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:48.194389   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.861190748s)
	I0729 18:27:48.194419   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 18:27:48.194443   77394 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 18:27:48.194483   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 18:27:50.159353   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.964849018s)
	I0729 18:27:50.159384   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 18:27:50.159427   77394 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 18:27:50.159494   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 18:27:52.256998   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.097482067s)
	I0729 18:27:52.257038   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 18:27:52.257075   77394 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 18:27:52.257125   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 18:27:50.239878   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:52.740167   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:50.042299   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:52.042567   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:54.043462   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:52.682628   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:53.182081   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:53.682919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:54.183194   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:54.682506   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:55.182992   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:55.682152   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:56.183083   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:56.682897   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:57.182789   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:52.899503   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 18:27:52.899539   77394 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 18:27:52.899594   77394 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 18:27:54.868011   77394 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.968389841s)
	I0729 18:27:54.868043   77394 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19345-11206/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 18:27:54.868075   77394 cache_images.go:123] Successfully loaded all cached images
	I0729 18:27:54.868080   77394 cache_images.go:92] duration metric: took 14.326689217s to LoadCachedImages
	I0729 18:27:54.868088   77394 kubeadm.go:934] updating node { 192.168.72.80 8443 v1.31.0-beta.0 crio true true} ...
	I0729 18:27:54.868226   77394 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-888056 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-888056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:27:54.868305   77394 ssh_runner.go:195] Run: crio config
	I0729 18:27:54.928569   77394 cni.go:84] Creating CNI manager for ""
	I0729 18:27:54.928591   77394 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:27:54.928604   77394 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:27:54.928633   77394 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.80 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-888056 NodeName:no-preload-888056 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:27:54.928800   77394 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-888056"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:27:54.928871   77394 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 18:27:54.939479   77394 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:27:54.939534   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:27:54.948928   77394 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0729 18:27:54.966700   77394 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 18:27:54.984218   77394 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0729 18:27:55.000813   77394 ssh_runner.go:195] Run: grep 192.168.72.80	control-plane.minikube.internal$ /etc/hosts
	I0729 18:27:55.004529   77394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:27:55.016140   77394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:27:55.141053   77394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:27:55.158874   77394 certs.go:68] Setting up /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056 for IP: 192.168.72.80
	I0729 18:27:55.158897   77394 certs.go:194] generating shared ca certs ...
	I0729 18:27:55.158918   77394 certs.go:226] acquiring lock for ca certs: {Name:mk128e8b8d2ff348f67bc6978aaf4e66f8542ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:27:55.159074   77394 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key
	I0729 18:27:55.159136   77394 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key
	I0729 18:27:55.159150   77394 certs.go:256] generating profile certs ...
	I0729 18:27:55.159245   77394 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/client.key
	I0729 18:27:55.159320   77394 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/apiserver.key.f09a151f
	I0729 18:27:55.159373   77394 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/proxy-client.key
	I0729 18:27:55.159511   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem (1338 bytes)
	W0729 18:27:55.159552   77394 certs.go:480] ignoring /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393_empty.pem, impossibly tiny 0 bytes
	I0729 18:27:55.159566   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:27:55.159600   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:27:55.159641   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:27:55.159680   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/certs/key.pem (1675 bytes)
	I0729 18:27:55.159734   77394 certs.go:484] found cert: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem (1708 bytes)
	I0729 18:27:55.160575   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:27:55.211823   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:27:55.248637   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:27:55.287972   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:27:55.317920   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 18:27:55.346034   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:27:55.377569   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:27:55.402593   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/no-preload-888056/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 18:27:55.427969   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/certs/18393.pem --> /usr/share/ca-certificates/18393.pem (1338 bytes)
	I0729 18:27:55.452060   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/ssl/certs/183932.pem --> /usr/share/ca-certificates/183932.pem (1708 bytes)
	I0729 18:27:55.476635   77394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19345-11206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:27:55.500831   77394 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:27:55.518744   77394 ssh_runner.go:195] Run: openssl version
	I0729 18:27:55.524865   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18393.pem && ln -fs /usr/share/ca-certificates/18393.pem /etc/ssl/certs/18393.pem"
	I0729 18:27:55.536601   77394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18393.pem
	I0729 18:27:55.541752   77394 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:08 /usr/share/ca-certificates/18393.pem
	I0729 18:27:55.541807   77394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18393.pem
	I0729 18:27:55.548070   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18393.pem /etc/ssl/certs/51391683.0"
	I0729 18:27:55.559866   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183932.pem && ln -fs /usr/share/ca-certificates/183932.pem /etc/ssl/certs/183932.pem"
	I0729 18:27:55.571833   77394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183932.pem
	I0729 18:27:55.576304   77394 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:08 /usr/share/ca-certificates/183932.pem
	I0729 18:27:55.576342   77394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183932.pem
	I0729 18:27:55.582204   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183932.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:27:55.594531   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:27:55.605773   77394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:55.610585   77394 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:55.610633   77394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:27:55.616478   77394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:27:55.628160   77394 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:27:55.632691   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:27:55.638793   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:27:55.644678   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:27:55.651117   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:27:55.657397   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:27:55.663351   77394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 18:27:55.670080   77394 kubeadm.go:392] StartCluster: {Name:no-preload-888056 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-888056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.80 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:27:55.670183   77394 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:27:55.670248   77394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:55.712280   77394 cri.go:89] found id: ""
	I0729 18:27:55.712343   77394 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:27:55.722878   77394 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:27:55.722898   77394 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:27:55.722935   77394 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:27:55.732704   77394 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:27:55.733646   77394 kubeconfig.go:125] found "no-preload-888056" server: "https://192.168.72.80:8443"
	I0729 18:27:55.736512   77394 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:27:55.748360   77394 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.80
	I0729 18:27:55.748403   77394 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:27:55.748416   77394 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:27:55.748464   77394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:27:55.789773   77394 cri.go:89] found id: ""
	I0729 18:27:55.789854   77394 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:27:55.808905   77394 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:27:55.819969   77394 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:27:55.819991   77394 kubeadm.go:157] found existing configuration files:
	
	I0729 18:27:55.820064   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:27:55.829392   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:27:55.829445   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:27:55.838934   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:27:55.848659   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:27:55.848720   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:27:55.859490   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:27:55.870024   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:27:55.870076   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:27:55.881599   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:27:55.891805   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:27:55.891869   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:27:55.901750   77394 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:27:55.911525   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:56.021031   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:57.075545   77394 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.054482988s)
	I0729 18:27:57.075571   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:57.302701   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:57.382837   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:27:55.261397   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:57.738688   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:59.739828   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:56.543870   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:59.043285   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:27:57.682237   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:58.182211   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:58.682456   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:59.182669   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:59.682863   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:00.182261   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:00.682993   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:01.182832   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:01.682899   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:02.182765   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:57.492480   77394 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:27:57.492580   77394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:57.993240   77394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:58.492965   77394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:27:58.517442   77394 api_server.go:72] duration metric: took 1.024961129s to wait for apiserver process to appear ...
	I0729 18:27:58.517479   77394 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:27:58.517505   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:27:58.518046   77394 api_server.go:269] stopped: https://192.168.72.80:8443/healthz: Get "https://192.168.72.80:8443/healthz": dial tcp 192.168.72.80:8443: connect: connection refused
	I0729 18:27:59.017614   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:02.088238   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:28:02.088265   77394 api_server.go:103] status: https://192.168.72.80:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:28:02.088277   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:02.147855   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:28:02.147882   77394 api_server.go:103] status: https://192.168.72.80:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:28:02.518439   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:02.525213   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:28:02.525247   77394 api_server.go:103] status: https://192.168.72.80:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:28:03.018275   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:03.024993   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:28:03.025023   77394 api_server.go:103] status: https://192.168.72.80:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:28:03.517564   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:28:03.523409   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 200:
	ok
	I0729 18:28:03.529656   77394 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 18:28:03.529687   77394 api_server.go:131] duration metric: took 5.01219984s to wait for apiserver health ...
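The health wait just completed is a plain poll loop: GET /healthz over HTTPS, treat 403 (anonymous access denied before RBAC bootstrap) and 500 (bootstrap-roles post-start hooks still pending) as "not ready yet", and stop at the first 200. A minimal Go sketch of that loop follows, assuming TLS verification is skipped for brevity where the real client would present certificates; the function waitForHealthz is hypothetical and shown only to make the pattern in the log explicit.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 or the deadline expires. Non-200 responses (403/500 in the log
// above) are simply retried.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for this sketch only: skip certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz check passed
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.72.80:8443/healthz", 2*time.Minute))
}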
	I0729 18:28:03.529698   77394 cni.go:84] Creating CNI manager for ""
	I0729 18:28:03.529706   77394 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:28:03.531527   77394 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:28:01.740935   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:03.743806   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:01.043882   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:03.542540   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:02.682331   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:03.182154   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:03.682499   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:04.182355   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:04.682338   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:05.182107   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:05.683125   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:06.182481   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:06.683153   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:07.182992   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:03.532788   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:28:03.544878   77394 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 18:28:03.586100   77394 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:28:03.604975   77394 system_pods.go:59] 8 kube-system pods found
	I0729 18:28:03.605012   77394 system_pods.go:61] "coredns-5cfdc65f69-bg5j4" [7a26ffbb-014c-4cf7-b302-214cf78374bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 18:28:03.605022   77394 system_pods.go:61] "etcd-no-preload-888056" [d76f2eb7-67d9-4ba0-8d2f-acfc78559651] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 18:28:03.605036   77394 system_pods.go:61] "kube-apiserver-no-preload-888056" [1dbea0ee-58be-47ca-b4ab-94065413768d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 18:28:03.605044   77394 system_pods.go:61] "kube-controller-manager-no-preload-888056" [fb8ce9d9-2953-4b91-8734-87bd38a63eb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 18:28:03.605051   77394 system_pods.go:61] "kube-proxy-w5z2f" [2425da76-cf2d-41c9-b8db-1370ab5333c5] Running
	I0729 18:28:03.605059   77394 system_pods.go:61] "kube-scheduler-no-preload-888056" [9958567f-116d-4094-9e7e-6208f7358486] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 18:28:03.605066   77394 system_pods.go:61] "metrics-server-78fcd8795b-jcdcw" [c506a5f8-d569-4c3d-9b6e-21b9fc63a86a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:28:03.605073   77394 system_pods.go:61] "storage-provisioner" [ccbc4fa6-1237-46ca-ac80-34972b9a43df] Running
	I0729 18:28:03.605082   77394 system_pods.go:74] duration metric: took 18.959807ms to wait for pod list to return data ...
	I0729 18:28:03.605095   77394 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:28:03.609225   77394 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:28:03.609249   77394 node_conditions.go:123] node cpu capacity is 2
	I0729 18:28:03.609261   77394 node_conditions.go:105] duration metric: took 4.16099ms to run NodePressure ...
	I0729 18:28:03.609278   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:28:03.881440   77394 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 18:28:03.886401   77394 kubeadm.go:739] kubelet initialised
	I0729 18:28:03.886429   77394 kubeadm.go:740] duration metric: took 4.958282ms waiting for restarted kubelet to initialise ...
	I0729 18:28:03.886440   77394 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:28:03.891373   77394 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:05.900595   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:06.239029   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:08.240309   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:06.042541   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:08.043322   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:07.682582   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:08.182094   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:08.682613   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:09.182936   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:09.682444   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:10.182354   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:10.682183   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:11.182502   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:11.682466   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:12.182113   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:08.397084   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:10.399546   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:10.897981   77394 pod_ready.go:92] pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:10.898006   77394 pod_ready.go:81] duration metric: took 7.006606905s for pod "coredns-5cfdc65f69-bg5j4" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:10.898014   77394 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:10.903064   77394 pod_ready.go:92] pod "etcd-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:10.903088   77394 pod_ready.go:81] duration metric: took 5.066249ms for pod "etcd-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:10.903099   77394 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:11.409319   77394 pod_ready.go:92] pod "kube-apiserver-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:11.409344   77394 pod_ready.go:81] duration metric: took 506.238678ms for pod "kube-apiserver-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:11.409353   77394 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:10.250001   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:12.741099   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:10.542146   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:13.042422   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:12.682526   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:13.183014   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:13.682449   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:14.182138   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:14.683065   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:15.182838   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:15.682680   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:16.182714   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:16.682116   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:17.182842   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:13.415469   77394 pod_ready.go:102] pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:13.917111   77394 pod_ready.go:92] pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:13.917134   77394 pod_ready.go:81] duration metric: took 2.507774546s for pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.917149   77394 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-w5z2f" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.922045   77394 pod_ready.go:92] pod "kube-proxy-w5z2f" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:13.922069   77394 pod_ready.go:81] duration metric: took 4.912892ms for pod "kube-proxy-w5z2f" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.922080   77394 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.927633   77394 pod_ready.go:92] pod "kube-scheduler-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:28:13.927654   77394 pod_ready.go:81] duration metric: took 5.565409ms for pod "kube-scheduler-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:13.927666   77394 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace to be "Ready" ...
	I0729 18:28:15.934081   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
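	[editor's note] The pod_ready.go lines above walk each system-critical pod (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, metrics-server) and wait up to 4m0s for its Ready condition; the hang on metrics-server-78fcd8795b-jcdcw is what the "Ready":"False" lines record. Below is a minimal client-go sketch of the same idea, not minikube's code; the kubeconfig path matches the one used elsewhere in this log, and the pod names are just examples.

```go
// Illustrative sketch: wait for kube-system pods to report Ready via client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Example pod names; the harness derives the real list from labels/components.
	for _, name := range []string{"coredns-5cfdc65f69-bg5j4", "etcd-no-preload-888056"} {
		deadline := time.Now().Add(4 * time.Minute)
		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Printf("pod %q is Ready\n", name)
				break
			}
			if time.Now().After(deadline) {
				fmt.Printf("timed out waiting for pod %q\n", name)
				break
			}
			time.Sleep(2 * time.Second)
		}
	}
}
```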
	I0729 18:28:15.240105   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:17.740031   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:19.740077   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:15.042540   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:17.043335   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:19.542061   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:17.683114   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:18.182919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:18.683103   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:19.182074   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:19.683031   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:20.182701   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:20.682749   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:21.182949   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:21.683001   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:22.182167   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:17.935797   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:20.434416   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:21.740735   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:24.238828   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:21.544060   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:24.042058   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:22.682723   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:23.182510   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:23.683084   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:24.182220   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:24.682699   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:25.182288   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:25.682433   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:26.182919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:26.682851   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:27.182225   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:22.435465   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:24.935088   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:26.239694   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:28.240174   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:26.542381   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:29.043706   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:27.682408   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:28.182187   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:28.683034   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:29.182922   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:29.682990   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:29.683063   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:29.730368   78080 cri.go:89] found id: ""
	I0729 18:28:29.730405   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.730413   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:29.730419   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:29.730473   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:29.770368   78080 cri.go:89] found id: ""
	I0729 18:28:29.770398   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.770409   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:29.770426   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:29.770479   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:29.809873   78080 cri.go:89] found id: ""
	I0729 18:28:29.809898   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.809906   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:29.809911   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:29.809970   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:29.848980   78080 cri.go:89] found id: ""
	I0729 18:28:29.849006   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.849016   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:29.849023   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:29.849082   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:29.887261   78080 cri.go:89] found id: ""
	I0729 18:28:29.887292   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.887302   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:29.887311   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:29.887388   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:29.927011   78080 cri.go:89] found id: ""
	I0729 18:28:29.927041   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.927051   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:29.927058   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:29.927122   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:29.965577   78080 cri.go:89] found id: ""
	I0729 18:28:29.965609   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.965619   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:29.965625   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:29.965693   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:29.999180   78080 cri.go:89] found id: ""
	I0729 18:28:29.999210   78080 logs.go:276] 0 containers: []
	W0729 18:28:29.999222   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:29.999233   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:29.999253   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:30.049401   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:30.049433   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:30.063903   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:30.063939   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:30.194776   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:30.194797   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:30.194812   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:30.261861   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:30.261906   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:27.434837   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:29.435257   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:31.435297   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:30.738940   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:32.740748   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:31.542494   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:33.542872   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:32.801821   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:32.814741   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:32.814815   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:32.853490   78080 cri.go:89] found id: ""
	I0729 18:28:32.853514   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.853522   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:32.853530   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:32.853580   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:32.890314   78080 cri.go:89] found id: ""
	I0729 18:28:32.890339   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.890349   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:32.890356   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:32.890435   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:32.928231   78080 cri.go:89] found id: ""
	I0729 18:28:32.928255   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.928262   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:32.928268   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:32.928314   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:32.964024   78080 cri.go:89] found id: ""
	I0729 18:28:32.964054   78080 logs.go:276] 0 containers: []
	W0729 18:28:32.964065   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:32.964072   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:32.964136   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:33.002099   78080 cri.go:89] found id: ""
	I0729 18:28:33.002127   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.002140   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:33.002146   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:33.002195   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:33.042238   78080 cri.go:89] found id: ""
	I0729 18:28:33.042265   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.042273   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:33.042278   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:33.042331   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:33.078715   78080 cri.go:89] found id: ""
	I0729 18:28:33.078741   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.078750   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:33.078756   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:33.078816   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:33.123304   78080 cri.go:89] found id: ""
	I0729 18:28:33.123334   78080 logs.go:276] 0 containers: []
	W0729 18:28:33.123342   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:33.123351   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:33.123366   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:33.198950   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:33.198994   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:33.223566   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:33.223594   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:33.306500   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:33.306526   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:33.306541   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:33.379386   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:33.379421   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:35.926834   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:35.942218   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:35.942296   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:35.980115   78080 cri.go:89] found id: ""
	I0729 18:28:35.980142   78080 logs.go:276] 0 containers: []
	W0729 18:28:35.980153   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:35.980159   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:35.980221   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:36.015354   78080 cri.go:89] found id: ""
	I0729 18:28:36.015379   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.015387   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:36.015392   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:36.015456   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:36.056411   78080 cri.go:89] found id: ""
	I0729 18:28:36.056435   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.056445   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:36.056451   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:36.056499   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:36.099153   78080 cri.go:89] found id: ""
	I0729 18:28:36.099180   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.099188   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:36.099193   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:36.099241   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:36.133427   78080 cri.go:89] found id: ""
	I0729 18:28:36.133459   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.133470   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:36.133477   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:36.133544   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:36.168619   78080 cri.go:89] found id: ""
	I0729 18:28:36.168646   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.168657   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:36.168664   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:36.168723   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:36.203636   78080 cri.go:89] found id: ""
	I0729 18:28:36.203666   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.203676   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:36.203684   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:36.203747   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:36.246495   78080 cri.go:89] found id: ""
	I0729 18:28:36.246523   78080 logs.go:276] 0 containers: []
	W0729 18:28:36.246533   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:36.246544   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:36.246561   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:36.260630   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:36.260656   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:36.337406   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:36.337424   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:36.337435   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:36.410016   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:36.410049   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:36.453458   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:36.453492   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:33.435859   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:35.934955   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:35.240070   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:37.739406   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:39.740035   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:35.543153   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:37.543467   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:39.543573   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:39.004147   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:39.018217   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:39.018279   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:39.054130   78080 cri.go:89] found id: ""
	I0729 18:28:39.054155   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.054166   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:39.054172   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:39.054219   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:39.090458   78080 cri.go:89] found id: ""
	I0729 18:28:39.090482   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.090490   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:39.090501   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:39.090548   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:39.126933   78080 cri.go:89] found id: ""
	I0729 18:28:39.126960   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.126971   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:39.126978   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:39.127042   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:39.162324   78080 cri.go:89] found id: ""
	I0729 18:28:39.162352   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.162381   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:39.162389   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:39.162450   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:39.202440   78080 cri.go:89] found id: ""
	I0729 18:28:39.202464   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.202471   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:39.202477   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:39.202537   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:39.238314   78080 cri.go:89] found id: ""
	I0729 18:28:39.238342   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.238352   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:39.238368   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:39.238436   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:39.275545   78080 cri.go:89] found id: ""
	I0729 18:28:39.275584   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.275592   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:39.275598   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:39.275663   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:39.311575   78080 cri.go:89] found id: ""
	I0729 18:28:39.311603   78080 logs.go:276] 0 containers: []
	W0729 18:28:39.311614   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:39.311624   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:39.311643   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:39.367667   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:39.367711   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:39.381823   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:39.381852   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:39.456060   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:39.456083   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:39.456100   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:39.531747   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:39.531784   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:42.077771   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:42.092424   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:42.092512   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:42.128710   78080 cri.go:89] found id: ""
	I0729 18:28:42.128744   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.128756   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:42.128765   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:42.128834   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:42.166092   78080 cri.go:89] found id: ""
	I0729 18:28:42.166126   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.166133   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:42.166138   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:42.166186   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:42.200955   78080 cri.go:89] found id: ""
	I0729 18:28:42.200981   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.200989   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:42.200994   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:42.201053   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:38.435476   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:40.935166   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:42.240354   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:44.739322   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:41.543640   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:43.543781   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:42.240176   78080 cri.go:89] found id: ""
	I0729 18:28:42.240203   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.240212   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:42.240219   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:42.240279   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:42.279844   78080 cri.go:89] found id: ""
	I0729 18:28:42.279872   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.279880   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:42.279885   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:42.279946   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:42.313071   78080 cri.go:89] found id: ""
	I0729 18:28:42.313099   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.313108   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:42.313114   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:42.313187   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:42.348540   78080 cri.go:89] found id: ""
	I0729 18:28:42.348566   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.348573   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:42.348580   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:42.348630   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:42.384688   78080 cri.go:89] found id: ""
	I0729 18:28:42.384714   78080 logs.go:276] 0 containers: []
	W0729 18:28:42.384725   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:42.384736   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:42.384750   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:42.399178   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:42.399206   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:42.472903   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:42.472921   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:42.472937   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:42.558541   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:42.558573   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:42.599403   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:42.599432   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:45.154026   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:45.167130   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:45.167200   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:45.203627   78080 cri.go:89] found id: ""
	I0729 18:28:45.203654   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.203663   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:45.203668   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:45.203714   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:45.242293   78080 cri.go:89] found id: ""
	I0729 18:28:45.242316   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.242325   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:45.242332   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:45.242403   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:45.282253   78080 cri.go:89] found id: ""
	I0729 18:28:45.282275   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.282282   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:45.282288   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:45.282335   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:45.320151   78080 cri.go:89] found id: ""
	I0729 18:28:45.320175   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.320183   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:45.320189   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:45.320250   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:45.356210   78080 cri.go:89] found id: ""
	I0729 18:28:45.356236   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.356247   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:45.356254   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:45.356316   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:45.393083   78080 cri.go:89] found id: ""
	I0729 18:28:45.393116   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.393131   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:45.393139   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:45.393199   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:45.430235   78080 cri.go:89] found id: ""
	I0729 18:28:45.430263   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.430274   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:45.430282   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:45.430346   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:45.463068   78080 cri.go:89] found id: ""
	I0729 18:28:45.463132   78080 logs.go:276] 0 containers: []
	W0729 18:28:45.463143   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:45.463155   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:45.463203   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:45.541411   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:45.541441   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:45.581967   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:45.582001   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:45.639427   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:45.639459   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:45.655715   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:45.655741   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:45.725820   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:42.943815   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:45.435444   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:46.739873   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:49.240293   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:46.042576   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:48.042735   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:48.226252   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:48.240419   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:48.240494   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:48.271506   78080 cri.go:89] found id: ""
	I0729 18:28:48.271538   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.271550   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:48.271557   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:48.271615   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:48.305163   78080 cri.go:89] found id: ""
	I0729 18:28:48.305186   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.305198   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:48.305203   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:48.305252   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:48.336453   78080 cri.go:89] found id: ""
	I0729 18:28:48.336480   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.336492   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:48.336500   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:48.336557   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:48.368690   78080 cri.go:89] found id: ""
	I0729 18:28:48.368713   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.368720   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:48.368725   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:48.368784   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:48.401723   78080 cri.go:89] found id: ""
	I0729 18:28:48.401746   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.401753   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:48.401758   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:48.401822   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:48.439876   78080 cri.go:89] found id: ""
	I0729 18:28:48.439896   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.439903   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:48.439908   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:48.439956   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:48.473352   78080 cri.go:89] found id: ""
	I0729 18:28:48.473383   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.473394   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:48.473401   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:48.473461   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:48.506752   78080 cri.go:89] found id: ""
	I0729 18:28:48.506779   78080 logs.go:276] 0 containers: []
	W0729 18:28:48.506788   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:48.506799   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:48.506815   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:48.547513   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:48.547535   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:48.599704   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:48.599733   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:48.613577   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:48.613604   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:48.681272   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:48.681290   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:48.681301   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:51.267397   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:51.280243   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:51.280317   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:51.314047   78080 cri.go:89] found id: ""
	I0729 18:28:51.314078   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.314090   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:51.314097   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:51.314162   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:51.346048   78080 cri.go:89] found id: ""
	I0729 18:28:51.346073   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.346080   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:51.346085   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:51.346144   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:51.380511   78080 cri.go:89] found id: ""
	I0729 18:28:51.380543   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.380553   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:51.380561   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:51.380637   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:51.415189   78080 cri.go:89] found id: ""
	I0729 18:28:51.415213   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.415220   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:51.415227   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:51.415310   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:51.454324   78080 cri.go:89] found id: ""
	I0729 18:28:51.454351   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.454380   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:51.454388   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:51.454449   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:51.488737   78080 cri.go:89] found id: ""
	I0729 18:28:51.488768   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.488779   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:51.488787   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:51.488848   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:51.528869   78080 cri.go:89] found id: ""
	I0729 18:28:51.528903   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.528912   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:51.528920   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:51.528972   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:51.566039   78080 cri.go:89] found id: ""
	I0729 18:28:51.566067   78080 logs.go:276] 0 containers: []
	W0729 18:28:51.566075   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:51.566086   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:51.566102   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:51.604746   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:51.604774   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:51.661048   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:51.661089   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:51.675420   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:51.675447   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:51.754496   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:51.754531   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:51.754548   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:47.934575   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:49.935187   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:51.247773   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:53.740386   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:50.043378   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:52.543104   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:54.335796   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:54.350726   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:54.350784   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:54.389661   78080 cri.go:89] found id: ""
	I0729 18:28:54.389683   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.389694   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:54.389701   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:54.389761   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:54.427073   78080 cri.go:89] found id: ""
	I0729 18:28:54.427100   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.427110   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:54.427117   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:54.427178   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:54.466761   78080 cri.go:89] found id: ""
	I0729 18:28:54.466793   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.466802   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:54.466808   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:54.466871   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:54.501115   78080 cri.go:89] found id: ""
	I0729 18:28:54.501144   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.501159   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:54.501167   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:54.501229   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:54.535430   78080 cri.go:89] found id: ""
	I0729 18:28:54.535461   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.535472   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:54.535480   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:54.535543   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:54.574994   78080 cri.go:89] found id: ""
	I0729 18:28:54.575024   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.575034   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:54.575041   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:54.575107   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:54.608770   78080 cri.go:89] found id: ""
	I0729 18:28:54.608792   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.608800   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:54.608805   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:54.608850   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:54.648026   78080 cri.go:89] found id: ""
	I0729 18:28:54.648050   78080 logs.go:276] 0 containers: []
	W0729 18:28:54.648057   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:54.648066   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:54.648077   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:54.728445   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:54.728485   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:54.774752   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:54.774781   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:54.826549   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:54.826582   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:28:54.840366   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:54.840394   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:54.907422   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:52.434956   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:54.436125   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:56.933929   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:56.239045   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:58.239967   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:55.041898   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:57.042968   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:59.542837   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:28:57.408469   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:28:57.421855   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:28:57.421923   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:28:57.457794   78080 cri.go:89] found id: ""
	I0729 18:28:57.457816   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.457824   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:28:57.457829   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:28:57.457908   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:28:57.492851   78080 cri.go:89] found id: ""
	I0729 18:28:57.492880   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.492888   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:28:57.492894   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:28:57.492946   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:28:57.528221   78080 cri.go:89] found id: ""
	I0729 18:28:57.528249   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.528258   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:28:57.528265   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:28:57.528330   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:28:57.565504   78080 cri.go:89] found id: ""
	I0729 18:28:57.565536   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.565547   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:28:57.565554   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:28:57.565618   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:28:57.599391   78080 cri.go:89] found id: ""
	I0729 18:28:57.599418   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.599426   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:28:57.599432   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:28:57.599491   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:28:57.643757   78080 cri.go:89] found id: ""
	I0729 18:28:57.643784   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.643798   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:28:57.643806   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:28:57.643867   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:28:57.680825   78080 cri.go:89] found id: ""
	I0729 18:28:57.680853   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.680864   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:28:57.680871   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:28:57.680936   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:28:57.714450   78080 cri.go:89] found id: ""
	I0729 18:28:57.714479   78080 logs.go:276] 0 containers: []
	W0729 18:28:57.714490   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:28:57.714500   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:28:57.714516   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:28:57.798411   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:28:57.798437   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:28:57.798453   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:28:57.878210   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:28:57.878246   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:28:57.917476   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:28:57.917505   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:57.971395   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:28:57.971432   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:00.486419   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:00.500625   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:00.500703   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:00.539625   78080 cri.go:89] found id: ""
	I0729 18:29:00.539650   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.539659   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:00.539682   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:00.539737   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:00.577252   78080 cri.go:89] found id: ""
	I0729 18:29:00.577284   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.577297   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:00.577303   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:00.577350   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:00.611850   78080 cri.go:89] found id: ""
	I0729 18:29:00.611878   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.611886   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:00.611892   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:00.611939   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:00.648964   78080 cri.go:89] found id: ""
	I0729 18:29:00.648989   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.648996   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:00.649003   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:00.649062   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:00.686124   78080 cri.go:89] found id: ""
	I0729 18:29:00.686147   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.686156   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:00.686161   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:00.686217   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:00.721166   78080 cri.go:89] found id: ""
	I0729 18:29:00.721195   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.721205   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:00.721213   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:00.721276   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:00.758394   78080 cri.go:89] found id: ""
	I0729 18:29:00.758423   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.758431   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:00.758436   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:00.758491   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:00.793487   78080 cri.go:89] found id: ""
	I0729 18:29:00.793514   78080 logs.go:276] 0 containers: []
	W0729 18:29:00.793523   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:00.793533   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:00.793549   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:00.807069   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:00.807106   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:00.880611   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:00.880629   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:00.880641   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:00.963534   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:00.963568   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:01.004145   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:01.004174   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:28:58.933964   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:00.934221   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:00.739676   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:02.741020   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:02.042346   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:04.541902   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:03.560985   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:03.574407   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:03.574476   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:03.608027   78080 cri.go:89] found id: ""
	I0729 18:29:03.608048   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.608057   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:03.608062   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:03.608119   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:03.644777   78080 cri.go:89] found id: ""
	I0729 18:29:03.644804   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.644814   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:03.644821   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:03.644895   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:03.684050   78080 cri.go:89] found id: ""
	I0729 18:29:03.684074   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.684082   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:03.684089   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:03.684149   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:03.724350   78080 cri.go:89] found id: ""
	I0729 18:29:03.724376   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.724383   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:03.724390   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:03.724439   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:03.766859   78080 cri.go:89] found id: ""
	I0729 18:29:03.766887   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.766898   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:03.766905   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:03.766967   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:03.800535   78080 cri.go:89] found id: ""
	I0729 18:29:03.800562   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.800572   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:03.800579   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:03.800639   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:03.834991   78080 cri.go:89] found id: ""
	I0729 18:29:03.835011   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.835019   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:03.835024   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:03.835073   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:03.869159   78080 cri.go:89] found id: ""
	I0729 18:29:03.869191   78080 logs.go:276] 0 containers: []
	W0729 18:29:03.869201   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:03.869211   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:03.869226   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:03.940451   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:03.940469   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:03.940487   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:04.020880   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:04.020910   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:04.064707   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:04.064728   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:04.121551   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:04.121587   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:06.636983   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:06.651500   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:06.651582   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:06.686556   78080 cri.go:89] found id: ""
	I0729 18:29:06.686582   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.686592   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:06.686599   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:06.686660   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:06.721967   78080 cri.go:89] found id: ""
	I0729 18:29:06.721996   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.722008   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:06.722016   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:06.722115   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:06.760409   78080 cri.go:89] found id: ""
	I0729 18:29:06.760433   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.760440   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:06.760445   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:06.760499   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:06.794050   78080 cri.go:89] found id: ""
	I0729 18:29:06.794074   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.794081   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:06.794087   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:06.794143   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:06.826445   78080 cri.go:89] found id: ""
	I0729 18:29:06.826471   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.826478   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:06.826484   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:06.826544   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:06.860680   78080 cri.go:89] found id: ""
	I0729 18:29:06.860700   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.860706   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:06.860712   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:06.860761   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:06.898192   78080 cri.go:89] found id: ""
	I0729 18:29:06.898215   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.898223   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:06.898229   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:06.898284   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:06.931892   78080 cri.go:89] found id: ""
	I0729 18:29:06.931920   78080 logs.go:276] 0 containers: []
	W0729 18:29:06.931930   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:06.931940   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:06.931955   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:06.987265   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:06.987294   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:07.043520   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:07.043547   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:07.056995   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:07.057019   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:07.124932   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:07.124956   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:07.124971   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:03.435778   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:05.936004   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:05.239352   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:07.239383   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:06.542526   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:08.543497   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:09.708947   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:09.723497   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:09.723565   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:09.762686   78080 cri.go:89] found id: ""
	I0729 18:29:09.762714   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.762725   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:09.762733   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:09.762797   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:09.799674   78080 cri.go:89] found id: ""
	I0729 18:29:09.799699   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.799708   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:09.799715   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:09.799775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:09.836121   78080 cri.go:89] found id: ""
	I0729 18:29:09.836147   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.836156   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:09.836161   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:09.836209   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:09.872758   78080 cri.go:89] found id: ""
	I0729 18:29:09.872783   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.872791   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:09.872797   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:09.872842   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:09.911681   78080 cri.go:89] found id: ""
	I0729 18:29:09.911711   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.911719   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:09.911724   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:09.911773   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:09.951531   78080 cri.go:89] found id: ""
	I0729 18:29:09.951554   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.951561   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:09.951567   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:09.951624   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:09.985568   78080 cri.go:89] found id: ""
	I0729 18:29:09.985597   78080 logs.go:276] 0 containers: []
	W0729 18:29:09.985606   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:09.985612   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:09.985661   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:10.020369   78080 cri.go:89] found id: ""
	I0729 18:29:10.020394   78080 logs.go:276] 0 containers: []
	W0729 18:29:10.020402   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:10.020409   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:10.020421   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:10.076538   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:10.076574   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:10.090954   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:10.090980   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:10.165843   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:10.165875   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:10.165890   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:10.242438   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:10.242469   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:08.434575   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:10.934523   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:09.744446   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:12.239540   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:14.242060   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:10.544272   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:13.043064   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:12.781369   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:12.797066   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:12.797160   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:12.832500   78080 cri.go:89] found id: ""
	I0729 18:29:12.832528   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.832545   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:12.832552   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:12.832615   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:12.866390   78080 cri.go:89] found id: ""
	I0729 18:29:12.866420   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.866428   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:12.866434   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:12.866494   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:12.901616   78080 cri.go:89] found id: ""
	I0729 18:29:12.901636   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.901644   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:12.901649   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:12.901713   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:12.935954   78080 cri.go:89] found id: ""
	I0729 18:29:12.935976   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.935985   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:12.935993   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:12.936053   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:12.970570   78080 cri.go:89] found id: ""
	I0729 18:29:12.970623   78080 logs.go:276] 0 containers: []
	W0729 18:29:12.970637   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:12.970645   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:12.970702   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:13.008629   78080 cri.go:89] found id: ""
	I0729 18:29:13.008658   78080 logs.go:276] 0 containers: []
	W0729 18:29:13.008666   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:13.008672   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:13.008725   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:13.045689   78080 cri.go:89] found id: ""
	I0729 18:29:13.045713   78080 logs.go:276] 0 containers: []
	W0729 18:29:13.045721   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:13.045726   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:13.045773   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:13.084707   78080 cri.go:89] found id: ""
	I0729 18:29:13.084735   78080 logs.go:276] 0 containers: []
	W0729 18:29:13.084745   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:13.084756   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:13.084774   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:13.161884   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:13.161920   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:13.205377   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:13.205410   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:13.258161   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:13.258189   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:13.272208   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:13.272240   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:13.347519   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:15.848068   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:15.861773   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:15.861851   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:15.902421   78080 cri.go:89] found id: ""
	I0729 18:29:15.902449   78080 logs.go:276] 0 containers: []
	W0729 18:29:15.902458   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:15.902466   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:15.902532   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:15.939552   78080 cri.go:89] found id: ""
	I0729 18:29:15.939576   78080 logs.go:276] 0 containers: []
	W0729 18:29:15.939583   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:15.939588   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:15.939645   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:15.974424   78080 cri.go:89] found id: ""
	I0729 18:29:15.974454   78080 logs.go:276] 0 containers: []
	W0729 18:29:15.974463   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:15.974468   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:15.974516   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:16.010955   78080 cri.go:89] found id: ""
	I0729 18:29:16.010993   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.011000   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:16.011006   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:16.011062   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:16.046785   78080 cri.go:89] found id: ""
	I0729 18:29:16.046815   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.046825   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:16.046832   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:16.046887   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:16.082691   78080 cri.go:89] found id: ""
	I0729 18:29:16.082721   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.082731   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:16.082739   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:16.082796   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:16.127633   78080 cri.go:89] found id: ""
	I0729 18:29:16.127663   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.127676   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:16.127684   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:16.127741   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:16.162641   78080 cri.go:89] found id: ""
	I0729 18:29:16.162662   78080 logs.go:276] 0 containers: []
	W0729 18:29:16.162670   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:16.162684   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:16.162695   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:16.215132   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:16.215162   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:16.229581   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:16.229607   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:16.303178   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:16.303198   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:16.303212   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:16.383739   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:16.383775   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:12.934751   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:14.934965   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:16.739047   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:18.739145   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:15.043163   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:17.544340   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:18.924292   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:18.937571   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:18.937626   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:18.970523   78080 cri.go:89] found id: ""
	I0729 18:29:18.970554   78080 logs.go:276] 0 containers: []
	W0729 18:29:18.970563   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:18.970568   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:18.970624   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:19.005448   78080 cri.go:89] found id: ""
	I0729 18:29:19.005471   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.005478   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:19.005483   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:19.005538   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:19.044352   78080 cri.go:89] found id: ""
	I0729 18:29:19.044377   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.044386   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:19.044393   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:19.044448   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:19.079288   78080 cri.go:89] found id: ""
	I0729 18:29:19.079317   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.079327   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:19.079333   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:19.079402   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:19.122932   78080 cri.go:89] found id: ""
	I0729 18:29:19.122954   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.122961   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:19.122967   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:19.123020   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:19.166992   78080 cri.go:89] found id: ""
	I0729 18:29:19.167018   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.167025   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:19.167031   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:19.167103   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:19.215301   78080 cri.go:89] found id: ""
	I0729 18:29:19.215331   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.215341   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:19.215355   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:19.215419   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:19.267635   78080 cri.go:89] found id: ""
	I0729 18:29:19.267657   78080 logs.go:276] 0 containers: []
	W0729 18:29:19.267664   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:19.267671   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:19.267682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:19.319924   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:19.319962   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:19.333987   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:19.334010   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:19.406541   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:19.406558   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:19.406571   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:19.487388   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:19.487426   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:22.027745   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:22.041145   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:22.041218   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:22.080000   78080 cri.go:89] found id: ""
	I0729 18:29:22.080022   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.080029   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:22.080034   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:22.080079   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:22.116385   78080 cri.go:89] found id: ""
	I0729 18:29:22.116415   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.116425   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:22.116431   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:22.116492   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:22.150530   78080 cri.go:89] found id: ""
	I0729 18:29:22.150552   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.150559   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:22.150565   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:22.150621   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:22.188782   78080 cri.go:89] found id: ""
	I0729 18:29:22.188808   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.188817   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:22.188822   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:22.188873   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:17.434007   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:19.434864   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:21.935573   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:20.739852   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:23.239853   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:20.044010   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:22.542952   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:24.543614   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:22.227117   78080 cri.go:89] found id: ""
	I0729 18:29:22.227152   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.227162   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:22.227169   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:22.227234   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:22.263057   78080 cri.go:89] found id: ""
	I0729 18:29:22.263079   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.263086   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:22.263091   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:22.263145   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:22.297368   78080 cri.go:89] found id: ""
	I0729 18:29:22.297391   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.297399   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:22.297406   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:22.297466   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:22.334117   78080 cri.go:89] found id: ""
	I0729 18:29:22.334149   78080 logs.go:276] 0 containers: []
	W0729 18:29:22.334159   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:22.334170   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:22.334184   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:22.349344   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:22.349369   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:22.415720   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:22.415743   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:22.415758   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:22.494937   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:22.494971   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:22.536352   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:22.536382   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:25.087795   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:25.103985   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:25.104050   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:25.158532   78080 cri.go:89] found id: ""
	I0729 18:29:25.158562   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.158572   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:25.158580   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:25.158641   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:25.216740   78080 cri.go:89] found id: ""
	I0729 18:29:25.216762   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.216769   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:25.216775   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:25.216827   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:25.254827   78080 cri.go:89] found id: ""
	I0729 18:29:25.254855   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.254865   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:25.254872   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:25.254934   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:25.289377   78080 cri.go:89] found id: ""
	I0729 18:29:25.289407   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.289417   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:25.289424   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:25.289484   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:25.328111   78080 cri.go:89] found id: ""
	I0729 18:29:25.328144   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.328153   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:25.328161   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:25.328224   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:25.364779   78080 cri.go:89] found id: ""
	I0729 18:29:25.364808   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.364815   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:25.364827   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:25.364874   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:25.402906   78080 cri.go:89] found id: ""
	I0729 18:29:25.402935   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.402942   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:25.402948   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:25.403007   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:25.438747   78080 cri.go:89] found id: ""
	I0729 18:29:25.438770   78080 logs.go:276] 0 containers: []
	W0729 18:29:25.438778   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:25.438787   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:25.438803   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:25.452803   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:25.452829   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:25.527575   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:25.527593   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:25.527610   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:25.622437   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:25.622482   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:25.661451   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:25.661478   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:23.936249   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:26.434496   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:25.739358   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:27.739702   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:27.043125   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:29.542130   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:28.213898   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:28.230013   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:28.230071   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:28.265484   78080 cri.go:89] found id: ""
	I0729 18:29:28.265511   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.265521   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:28.265530   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:28.265594   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:28.306374   78080 cri.go:89] found id: ""
	I0729 18:29:28.306428   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.306441   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:28.306448   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:28.306501   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:28.340274   78080 cri.go:89] found id: ""
	I0729 18:29:28.340299   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.340309   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:28.340316   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:28.340379   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:28.373928   78080 cri.go:89] found id: ""
	I0729 18:29:28.373973   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.373982   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:28.373990   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:28.374052   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:28.407075   78080 cri.go:89] found id: ""
	I0729 18:29:28.407107   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.407120   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:28.407129   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:28.407215   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:28.444501   78080 cri.go:89] found id: ""
	I0729 18:29:28.444528   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.444536   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:28.444543   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:28.444614   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:28.487513   78080 cri.go:89] found id: ""
	I0729 18:29:28.487540   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.487548   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:28.487554   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:28.487611   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:28.521957   78080 cri.go:89] found id: ""
	I0729 18:29:28.521990   78080 logs.go:276] 0 containers: []
	W0729 18:29:28.522000   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:28.522011   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:28.522027   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:28.536880   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:28.536918   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:28.609486   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:28.609513   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:28.609528   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:28.694086   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:28.694125   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:28.733930   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:28.733964   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:31.292260   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:31.305840   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:31.305899   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:31.342510   78080 cri.go:89] found id: ""
	I0729 18:29:31.342539   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.342550   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:31.342557   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:31.342613   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:31.375093   78080 cri.go:89] found id: ""
	I0729 18:29:31.375118   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.375128   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:31.375135   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:31.375198   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:31.408554   78080 cri.go:89] found id: ""
	I0729 18:29:31.408576   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.408583   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:31.408588   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:31.408660   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:31.448748   78080 cri.go:89] found id: ""
	I0729 18:29:31.448774   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.448783   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:31.448796   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:31.448855   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:31.483541   78080 cri.go:89] found id: ""
	I0729 18:29:31.483564   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.483572   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:31.483578   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:31.483637   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:31.518173   78080 cri.go:89] found id: ""
	I0729 18:29:31.518198   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.518209   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:31.518217   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:31.518279   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:31.553345   78080 cri.go:89] found id: ""
	I0729 18:29:31.553371   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.553379   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:31.553384   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:31.553439   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:31.591857   78080 cri.go:89] found id: ""
	I0729 18:29:31.591887   78080 logs.go:276] 0 containers: []
	W0729 18:29:31.591905   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:31.591916   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:31.591929   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:31.648404   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:31.648436   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:31.661455   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:31.661477   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:31.732978   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:31.732997   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:31.733009   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:31.812105   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:31.812145   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:28.435517   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:30.436822   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:30.239755   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:32.739231   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:34.739534   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:31.542847   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:33.543096   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:34.353079   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:34.366759   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:34.366817   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:34.400944   78080 cri.go:89] found id: ""
	I0729 18:29:34.400974   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.400984   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:34.400991   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:34.401055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:34.439348   78080 cri.go:89] found id: ""
	I0729 18:29:34.439373   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.439383   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:34.439395   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:34.439444   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:34.473969   78080 cri.go:89] found id: ""
	I0729 18:29:34.473991   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.474010   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:34.474017   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:34.474080   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:34.507741   78080 cri.go:89] found id: ""
	I0729 18:29:34.507770   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.507778   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:34.507784   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:34.507845   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:34.543794   78080 cri.go:89] found id: ""
	I0729 18:29:34.543815   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.543823   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:34.543830   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:34.543895   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:34.577893   78080 cri.go:89] found id: ""
	I0729 18:29:34.577918   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.577926   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:34.577931   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:34.577978   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:34.612703   78080 cri.go:89] found id: ""
	I0729 18:29:34.612735   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.612745   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:34.612752   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:34.612815   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:34.648167   78080 cri.go:89] found id: ""
	I0729 18:29:34.648197   78080 logs.go:276] 0 containers: []
	W0729 18:29:34.648209   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:34.648219   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:34.648233   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:34.689821   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:34.689848   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:34.743902   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:34.743935   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:34.757400   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:34.757426   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:34.833684   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:34.833706   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:34.833721   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:32.934207   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:34.936549   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:37.238618   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:39.239761   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:36.042461   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:38.543304   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:37.419270   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:37.433249   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:37.433301   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:37.469991   78080 cri.go:89] found id: ""
	I0729 18:29:37.470021   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.470031   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:37.470038   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:37.470098   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:37.504511   78080 cri.go:89] found id: ""
	I0729 18:29:37.504537   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.504548   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:37.504554   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:37.504612   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:37.545304   78080 cri.go:89] found id: ""
	I0729 18:29:37.545332   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.545342   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:37.545349   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:37.545406   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:37.584255   78080 cri.go:89] found id: ""
	I0729 18:29:37.584280   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.584287   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:37.584292   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:37.584345   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:37.620917   78080 cri.go:89] found id: ""
	I0729 18:29:37.620943   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.620951   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:37.620958   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:37.621022   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:37.659381   78080 cri.go:89] found id: ""
	I0729 18:29:37.659405   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.659414   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:37.659419   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:37.659486   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:37.701337   78080 cri.go:89] found id: ""
	I0729 18:29:37.701360   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.701368   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:37.701373   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:37.701426   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:37.737142   78080 cri.go:89] found id: ""
	I0729 18:29:37.737168   78080 logs.go:276] 0 containers: []
	W0729 18:29:37.737177   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:37.737186   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:37.737201   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:37.789951   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:37.789992   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:37.804759   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:37.804784   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:37.881777   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:37.881794   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:37.881808   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:37.970593   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:37.970625   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:40.511557   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:40.525472   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:40.525527   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:40.564227   78080 cri.go:89] found id: ""
	I0729 18:29:40.564253   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.564263   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:40.564270   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:40.564336   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:40.600384   78080 cri.go:89] found id: ""
	I0729 18:29:40.600409   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.600417   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:40.600423   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:40.600475   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:40.634819   78080 cri.go:89] found id: ""
	I0729 18:29:40.634843   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.634858   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:40.634866   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:40.634913   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:40.669963   78080 cri.go:89] found id: ""
	I0729 18:29:40.669991   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.669999   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:40.670006   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:40.670069   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:40.705680   78080 cri.go:89] found id: ""
	I0729 18:29:40.705705   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.705714   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:40.705719   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:40.705775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:40.743691   78080 cri.go:89] found id: ""
	I0729 18:29:40.743715   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.743725   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:40.743732   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:40.743820   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:40.783858   78080 cri.go:89] found id: ""
	I0729 18:29:40.783889   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.783898   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:40.783903   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:40.783953   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:40.821499   78080 cri.go:89] found id: ""
	I0729 18:29:40.821527   78080 logs.go:276] 0 containers: []
	W0729 18:29:40.821537   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:40.821547   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:40.821562   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:40.874941   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:40.874972   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:40.888034   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:40.888057   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:40.960013   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:40.960032   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:40.960044   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:41.043013   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:41.043042   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
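	(Every "describe nodes" attempt above fails with the same "connection to the server localhost:8443 was refused" because no kube-apiserver container exists yet, so nothing is serving on the apiserver port. A hypothetical manual check, not part of the recorded run, that one could use from the node to confirm this; the `ss` probe is an illustrative addition, while the kubectl binary and kubeconfig paths are the ones the runner itself uses:)

	# Confirm the apiserver port is actually closed before trusting the refusal.
	sudo ss -tlnp | grep 8443 || echo "nothing is listening on 8443"
	# Same kubectl binary and kubeconfig as the runner; expect the identical refusal
	# until a kube-apiserver container shows up in `sudo crictl ps -a`.
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl get nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig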
	I0729 18:29:37.435119   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:39.435967   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:41.934232   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:41.739070   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:43.739497   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:40.543453   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:43.042528   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:43.583555   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:43.597120   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:43.597193   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:43.631500   78080 cri.go:89] found id: ""
	I0729 18:29:43.631526   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.631535   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:43.631542   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:43.631607   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:43.667003   78080 cri.go:89] found id: ""
	I0729 18:29:43.667029   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.667037   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:43.667042   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:43.667102   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:43.701471   78080 cri.go:89] found id: ""
	I0729 18:29:43.701502   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.701510   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:43.701515   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:43.701569   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:43.740037   78080 cri.go:89] found id: ""
	I0729 18:29:43.740058   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.740067   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:43.740074   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:43.740145   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:43.772584   78080 cri.go:89] found id: ""
	I0729 18:29:43.772610   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.772620   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:43.772626   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:43.772689   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:43.806340   78080 cri.go:89] found id: ""
	I0729 18:29:43.806382   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.806393   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:43.806401   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:43.806480   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:43.840085   78080 cri.go:89] found id: ""
	I0729 18:29:43.840109   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.840118   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:43.840133   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:43.840198   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:43.873412   78080 cri.go:89] found id: ""
	I0729 18:29:43.873438   78080 logs.go:276] 0 containers: []
	W0729 18:29:43.873448   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:43.873458   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:43.873473   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:43.928762   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:43.928790   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:43.944129   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:43.944156   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:44.017330   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:44.017349   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:44.017361   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:44.106858   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:44.106915   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:46.651050   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:46.665253   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:46.665310   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:46.698846   78080 cri.go:89] found id: ""
	I0729 18:29:46.698871   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.698881   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:46.698888   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:46.698956   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:46.734354   78080 cri.go:89] found id: ""
	I0729 18:29:46.734395   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.734405   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:46.734413   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:46.734468   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:46.771978   78080 cri.go:89] found id: ""
	I0729 18:29:46.771999   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.772007   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:46.772012   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:46.772059   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:46.807231   78080 cri.go:89] found id: ""
	I0729 18:29:46.807255   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.807263   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:46.807272   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:46.807329   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:46.842257   78080 cri.go:89] found id: ""
	I0729 18:29:46.842278   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.842306   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:46.842312   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:46.842373   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:46.876287   78080 cri.go:89] found id: ""
	I0729 18:29:46.876309   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.876317   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:46.876323   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:46.876389   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:46.909695   78080 cri.go:89] found id: ""
	I0729 18:29:46.909719   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.909726   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:46.909731   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:46.909806   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:46.951768   78080 cri.go:89] found id: ""
	I0729 18:29:46.951798   78080 logs.go:276] 0 containers: []
	W0729 18:29:46.951807   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:46.951815   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:46.951825   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:47.025467   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:47.025485   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:47.025497   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:47.106336   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:47.106391   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:47.145652   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:47.145682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:47.200857   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:47.200886   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:43.935210   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:46.434346   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:45.739606   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:48.240282   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:45.544442   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:48.042872   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:49.715401   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:49.729703   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:49.729776   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:49.770016   78080 cri.go:89] found id: ""
	I0729 18:29:49.770039   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.770062   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:49.770070   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:49.770127   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:49.805464   78080 cri.go:89] found id: ""
	I0729 18:29:49.805487   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.805495   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:49.805500   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:49.805560   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:49.838739   78080 cri.go:89] found id: ""
	I0729 18:29:49.838770   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.838782   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:49.838789   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:49.838861   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:49.881168   78080 cri.go:89] found id: ""
	I0729 18:29:49.881194   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.881202   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:49.881208   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:49.881269   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:49.919978   78080 cri.go:89] found id: ""
	I0729 18:29:49.919999   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.920006   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:49.920012   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:49.920079   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:49.958971   78080 cri.go:89] found id: ""
	I0729 18:29:49.958996   78080 logs.go:276] 0 containers: []
	W0729 18:29:49.959006   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:49.959013   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:49.959063   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:50.001253   78080 cri.go:89] found id: ""
	I0729 18:29:50.001281   78080 logs.go:276] 0 containers: []
	W0729 18:29:50.001291   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:50.001298   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:50.001362   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:50.038729   78080 cri.go:89] found id: ""
	I0729 18:29:50.038755   78080 logs.go:276] 0 containers: []
	W0729 18:29:50.038766   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:50.038776   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:50.038789   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:50.082540   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:50.082567   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:50.132372   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:50.132413   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:50.146806   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:50.146835   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:50.214495   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:50.214515   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:50.214532   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:48.435540   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:50.935475   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:50.240626   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:52.739158   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:50.044073   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:52.047924   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:54.542657   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:52.793987   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:52.808085   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:52.808149   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:52.844869   78080 cri.go:89] found id: ""
	I0729 18:29:52.844904   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.844917   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:52.844925   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:52.844986   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:52.878097   78080 cri.go:89] found id: ""
	I0729 18:29:52.878122   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.878135   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:52.878142   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:52.878191   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:52.910843   78080 cri.go:89] found id: ""
	I0729 18:29:52.910884   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.910894   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:52.910902   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:52.910953   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:52.943233   78080 cri.go:89] found id: ""
	I0729 18:29:52.943257   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.943267   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:52.943274   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:52.943335   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:52.978354   78080 cri.go:89] found id: ""
	I0729 18:29:52.978402   78080 logs.go:276] 0 containers: []
	W0729 18:29:52.978413   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:52.978423   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:52.978503   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:53.011238   78080 cri.go:89] found id: ""
	I0729 18:29:53.011266   78080 logs.go:276] 0 containers: []
	W0729 18:29:53.011276   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:53.011283   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:53.011336   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:53.048787   78080 cri.go:89] found id: ""
	I0729 18:29:53.048817   78080 logs.go:276] 0 containers: []
	W0729 18:29:53.048827   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:53.048834   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:53.048900   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:53.086108   78080 cri.go:89] found id: ""
	I0729 18:29:53.086135   78080 logs.go:276] 0 containers: []
	W0729 18:29:53.086156   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:53.086176   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:53.086195   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:53.137552   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:53.137580   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:53.151308   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:53.151333   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:53.225968   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:53.225992   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:53.226004   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:53.308111   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:53.308145   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:55.850207   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:55.864003   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:55.864054   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:55.898109   78080 cri.go:89] found id: ""
	I0729 18:29:55.898134   78080 logs.go:276] 0 containers: []
	W0729 18:29:55.898142   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:55.898148   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:55.898201   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:55.931616   78080 cri.go:89] found id: ""
	I0729 18:29:55.931643   78080 logs.go:276] 0 containers: []
	W0729 18:29:55.931653   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:55.931660   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:55.931719   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:55.969034   78080 cri.go:89] found id: ""
	I0729 18:29:55.969063   78080 logs.go:276] 0 containers: []
	W0729 18:29:55.969073   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:55.969080   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:55.969142   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:56.007552   78080 cri.go:89] found id: ""
	I0729 18:29:56.007576   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.007586   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:56.007592   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:56.007653   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:56.044342   78080 cri.go:89] found id: ""
	I0729 18:29:56.044367   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.044376   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:56.044382   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:56.044437   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:56.078352   78080 cri.go:89] found id: ""
	I0729 18:29:56.078396   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.078412   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:56.078420   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:56.078471   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:56.116505   78080 cri.go:89] found id: ""
	I0729 18:29:56.116532   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.116543   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:56.116551   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:56.116611   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:56.151493   78080 cri.go:89] found id: ""
	I0729 18:29:56.151516   78080 logs.go:276] 0 containers: []
	W0729 18:29:56.151523   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:56.151530   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:56.151542   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:29:56.206170   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:56.206198   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:56.219658   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:56.219684   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:56.290279   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:56.290300   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:56.290312   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:56.371352   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:56.371382   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:53.434046   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:55.435343   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:55.239055   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:57.241032   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:59.740003   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:57.041745   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:59.042416   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:58.908793   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:29:58.922566   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:29:58.922626   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:29:58.959375   78080 cri.go:89] found id: ""
	I0729 18:29:58.959397   78080 logs.go:276] 0 containers: []
	W0729 18:29:58.959404   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:29:58.959410   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:29:58.959459   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:29:58.993235   78080 cri.go:89] found id: ""
	I0729 18:29:58.993257   78080 logs.go:276] 0 containers: []
	W0729 18:29:58.993265   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:29:58.993271   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:29:58.993331   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:29:59.028186   78080 cri.go:89] found id: ""
	I0729 18:29:59.028212   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.028220   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:29:59.028225   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:29:59.028271   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:29:59.063589   78080 cri.go:89] found id: ""
	I0729 18:29:59.063619   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.063628   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:29:59.063635   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:29:59.063695   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:29:59.101116   78080 cri.go:89] found id: ""
	I0729 18:29:59.101142   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.101152   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:29:59.101158   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:29:59.101208   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:59.135288   78080 cri.go:89] found id: ""
	I0729 18:29:59.135314   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.135324   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:29:59.135332   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:29:59.135395   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:29:59.170520   78080 cri.go:89] found id: ""
	I0729 18:29:59.170549   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.170557   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:29:59.170562   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:29:59.170618   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:29:59.229796   78080 cri.go:89] found id: ""
	I0729 18:29:59.229825   78080 logs.go:276] 0 containers: []
	W0729 18:29:59.229835   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:29:59.229843   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:29:59.229871   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:29:59.244654   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:29:59.244682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:29:59.321262   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:29:59.321286   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:29:59.321301   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:29:59.401423   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:29:59.401459   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:29:59.442916   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:29:59.442938   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:01.995116   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:02.008454   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:02.008516   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:02.046412   78080 cri.go:89] found id: ""
	I0729 18:30:02.046431   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.046438   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:02.046443   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:02.046487   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:02.082444   78080 cri.go:89] found id: ""
	I0729 18:30:02.082466   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.082476   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:02.082482   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:02.082551   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:02.116013   78080 cri.go:89] found id: ""
	I0729 18:30:02.116041   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.116052   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:02.116058   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:02.116127   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:02.155817   78080 cri.go:89] found id: ""
	I0729 18:30:02.155844   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.155854   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:02.155862   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:02.155914   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:02.195518   78080 cri.go:89] found id: ""
	I0729 18:30:02.195548   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.195556   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:02.195563   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:02.195624   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:29:57.934058   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:29:59.934547   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:01.935238   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:01.742050   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:04.239758   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:01.043550   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:03.542544   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:02.228248   78080 cri.go:89] found id: ""
	I0729 18:30:02.228274   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.228283   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:02.228289   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:02.228370   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:02.262441   78080 cri.go:89] found id: ""
	I0729 18:30:02.262469   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.262479   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:02.262486   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:02.262546   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:02.296900   78080 cri.go:89] found id: ""
	I0729 18:30:02.296930   78080 logs.go:276] 0 containers: []
	W0729 18:30:02.296937   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:02.296953   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:02.296965   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:02.352356   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:02.352389   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:02.366336   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:02.366365   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:02.441367   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:02.441389   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:02.441403   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:02.524134   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:02.524173   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:05.071581   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:05.085481   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:05.085535   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:05.121610   78080 cri.go:89] found id: ""
	I0729 18:30:05.121636   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.121644   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:05.121652   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:05.121716   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:05.157382   78080 cri.go:89] found id: ""
	I0729 18:30:05.157406   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.157413   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:05.157418   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:05.157478   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:05.195552   78080 cri.go:89] found id: ""
	I0729 18:30:05.195582   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.195593   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:05.195600   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:05.195657   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:05.231071   78080 cri.go:89] found id: ""
	I0729 18:30:05.231095   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.231103   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:05.231108   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:05.231165   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:05.267445   78080 cri.go:89] found id: ""
	I0729 18:30:05.267474   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.267485   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:05.267493   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:05.267555   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:05.304258   78080 cri.go:89] found id: ""
	I0729 18:30:05.304279   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.304286   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:05.304291   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:05.304338   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:05.339155   78080 cri.go:89] found id: ""
	I0729 18:30:05.339176   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.339184   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:05.339190   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:05.339243   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:05.375291   78080 cri.go:89] found id: ""
	I0729 18:30:05.375328   78080 logs.go:276] 0 containers: []
	W0729 18:30:05.375337   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:05.375346   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:05.375361   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:05.446196   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:05.446221   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:05.446236   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:05.529421   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:05.529457   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:05.570234   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:05.570269   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:05.629349   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:05.629391   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:04.434625   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:06.934246   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:06.239886   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:08.242421   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:05.543394   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:08.042242   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:08.151320   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:08.165983   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:08.166045   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:08.205703   78080 cri.go:89] found id: ""
	I0729 18:30:08.205726   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.205733   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:08.205738   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:08.205786   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:08.245919   78080 cri.go:89] found id: ""
	I0729 18:30:08.245946   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.245957   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:08.245964   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:08.246024   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:08.286595   78080 cri.go:89] found id: ""
	I0729 18:30:08.286621   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.286631   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:08.286638   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:08.286700   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:08.330032   78080 cri.go:89] found id: ""
	I0729 18:30:08.330060   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.330070   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:08.330077   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:08.330140   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:08.362535   78080 cri.go:89] found id: ""
	I0729 18:30:08.362567   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.362578   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:08.362586   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:08.362645   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:08.397648   78080 cri.go:89] found id: ""
	I0729 18:30:08.397678   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.397688   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:08.397704   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:08.397766   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:08.433615   78080 cri.go:89] found id: ""
	I0729 18:30:08.433693   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.433716   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:08.433734   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:08.433809   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:08.465765   78080 cri.go:89] found id: ""
	I0729 18:30:08.465792   78080 logs.go:276] 0 containers: []
	W0729 18:30:08.465803   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:08.465814   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:08.465829   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:08.536332   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:08.536360   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:08.536375   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:08.613737   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:08.613776   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:08.659707   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:08.659736   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:08.712702   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:08.712736   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:11.226660   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:11.240852   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:11.240919   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:11.277632   78080 cri.go:89] found id: ""
	I0729 18:30:11.277664   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.277675   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:11.277682   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:11.277751   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:11.312458   78080 cri.go:89] found id: ""
	I0729 18:30:11.312478   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.312485   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:11.312491   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:11.312551   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:11.350375   78080 cri.go:89] found id: ""
	I0729 18:30:11.350406   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.350416   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:11.350424   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:11.350486   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:11.389280   78080 cri.go:89] found id: ""
	I0729 18:30:11.389307   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.389317   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:11.389324   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:11.389382   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:11.424907   78080 cri.go:89] found id: ""
	I0729 18:30:11.424936   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.424944   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:11.424949   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:11.425009   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:11.480686   78080 cri.go:89] found id: ""
	I0729 18:30:11.480713   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.480720   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:11.480726   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:11.480778   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:11.514831   78080 cri.go:89] found id: ""
	I0729 18:30:11.514857   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.514864   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:11.514870   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:11.514917   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:11.547930   78080 cri.go:89] found id: ""
	I0729 18:30:11.547955   78080 logs.go:276] 0 containers: []
	W0729 18:30:11.547964   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:11.547974   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:11.547989   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:11.586068   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:11.586098   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:11.646857   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:11.646892   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:11.663549   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:11.663576   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:11.731362   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:11.731383   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:11.731397   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:08.934638   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:11.434765   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:10.738608   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:12.740637   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:10.042514   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:12.042731   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:14.042952   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:14.315531   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:14.330485   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:14.330544   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:14.363403   78080 cri.go:89] found id: ""
	I0729 18:30:14.363433   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.363444   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:14.363451   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:14.363516   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:14.401204   78080 cri.go:89] found id: ""
	I0729 18:30:14.401227   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.401234   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:14.401240   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:14.401301   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:14.436737   78080 cri.go:89] found id: ""
	I0729 18:30:14.436765   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.436775   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:14.436782   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:14.436844   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:14.471376   78080 cri.go:89] found id: ""
	I0729 18:30:14.471403   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.471411   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:14.471419   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:14.471478   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:14.506883   78080 cri.go:89] found id: ""
	I0729 18:30:14.506914   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.506925   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:14.506932   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:14.506990   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:14.546444   78080 cri.go:89] found id: ""
	I0729 18:30:14.546469   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.546479   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:14.546486   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:14.546552   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:14.580282   78080 cri.go:89] found id: ""
	I0729 18:30:14.580313   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.580320   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:14.580326   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:14.580387   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:14.614185   78080 cri.go:89] found id: ""
	I0729 18:30:14.614210   78080 logs.go:276] 0 containers: []
	W0729 18:30:14.614220   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:14.614231   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:14.614246   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:14.652588   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:14.652610   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:14.706056   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:14.706090   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:14.719332   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:14.719356   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:14.792087   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:14.792115   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:14.792136   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:13.934967   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:16.435238   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:14.740676   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:17.239466   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:19.239656   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:16.541564   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:18.547053   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:17.375639   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:17.389473   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:17.389535   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:17.424485   78080 cri.go:89] found id: ""
	I0729 18:30:17.424513   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.424521   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:17.424527   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:17.424572   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:17.461100   78080 cri.go:89] found id: ""
	I0729 18:30:17.461129   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.461136   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:17.461141   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:17.461191   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:17.494866   78080 cri.go:89] found id: ""
	I0729 18:30:17.494894   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.494902   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:17.494907   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:17.494983   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:17.529897   78080 cri.go:89] found id: ""
	I0729 18:30:17.529924   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.529934   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:17.529940   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:17.530002   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:17.569870   78080 cri.go:89] found id: ""
	I0729 18:30:17.569897   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.569905   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:17.569910   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:17.569958   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:17.605324   78080 cri.go:89] found id: ""
	I0729 18:30:17.605364   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.605384   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:17.605392   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:17.605457   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:17.640552   78080 cri.go:89] found id: ""
	I0729 18:30:17.640583   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.640595   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:17.640602   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:17.640668   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:17.679769   78080 cri.go:89] found id: ""
	I0729 18:30:17.679800   78080 logs.go:276] 0 containers: []
	W0729 18:30:17.679808   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:17.679827   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:17.679843   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:17.757782   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:17.757814   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:17.803850   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:17.803878   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:17.857987   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:17.858017   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:17.871062   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:17.871086   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:17.940456   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:20.441171   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:20.454752   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:20.454824   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:20.490744   78080 cri.go:89] found id: ""
	I0729 18:30:20.490773   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.490783   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:20.490791   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:20.490853   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:20.524406   78080 cri.go:89] found id: ""
	I0729 18:30:20.524437   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.524448   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:20.524463   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:20.524515   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:20.559225   78080 cri.go:89] found id: ""
	I0729 18:30:20.559257   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.559268   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:20.559275   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:20.559337   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:20.595297   78080 cri.go:89] found id: ""
	I0729 18:30:20.595324   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.595355   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:20.595364   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:20.595436   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:20.632176   78080 cri.go:89] found id: ""
	I0729 18:30:20.632204   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.632215   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:20.632222   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:20.632282   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:20.676600   78080 cri.go:89] found id: ""
	I0729 18:30:20.676625   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.676632   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:20.676638   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:20.676734   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:20.717920   78080 cri.go:89] found id: ""
	I0729 18:30:20.717945   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.717955   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:20.717966   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:20.718021   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:20.756217   78080 cri.go:89] found id: ""
	I0729 18:30:20.756243   78080 logs.go:276] 0 containers: []
	W0729 18:30:20.756253   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:20.756262   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:20.756277   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:20.837150   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:20.837189   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:20.876023   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:20.876050   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:20.932402   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:20.932429   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:20.947422   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:20.947454   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:21.022698   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:18.934790   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:21.434992   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:21.242999   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:23.739073   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:21.042689   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:23.042794   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:23.523141   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:23.538019   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:23.538098   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:23.576953   78080 cri.go:89] found id: ""
	I0729 18:30:23.576979   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.576991   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:23.576998   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:23.577060   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:23.613052   78080 cri.go:89] found id: ""
	I0729 18:30:23.613083   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.613094   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:23.613100   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:23.613170   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:23.648694   78080 cri.go:89] found id: ""
	I0729 18:30:23.648717   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.648725   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:23.648730   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:23.648775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:23.680939   78080 cri.go:89] found id: ""
	I0729 18:30:23.680965   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.680972   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:23.680977   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:23.681032   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:23.716529   78080 cri.go:89] found id: ""
	I0729 18:30:23.716556   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.716564   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:23.716569   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:23.716628   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:23.756833   78080 cri.go:89] found id: ""
	I0729 18:30:23.756860   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.756868   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:23.756873   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:23.756918   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:23.796436   78080 cri.go:89] found id: ""
	I0729 18:30:23.796460   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.796467   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:23.796472   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:23.796519   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:23.839877   78080 cri.go:89] found id: ""
	I0729 18:30:23.839906   78080 logs.go:276] 0 containers: []
	W0729 18:30:23.839914   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:23.839922   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:23.839934   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:23.879423   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:23.879447   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:23.928379   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:23.928408   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:23.942639   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:23.942669   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:24.014068   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:24.014095   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:24.014110   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:26.597923   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:26.610877   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:26.610945   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:26.647550   78080 cri.go:89] found id: ""
	I0729 18:30:26.647579   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.647590   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:26.647598   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:26.647655   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:26.681552   78080 cri.go:89] found id: ""
	I0729 18:30:26.681581   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.681589   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:26.681595   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:26.681660   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:26.714475   78080 cri.go:89] found id: ""
	I0729 18:30:26.714503   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.714513   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:26.714519   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:26.714588   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:26.748671   78080 cri.go:89] found id: ""
	I0729 18:30:26.748697   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.748707   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:26.748714   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:26.748775   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:26.781380   78080 cri.go:89] found id: ""
	I0729 18:30:26.781406   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.781421   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:26.781429   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:26.781483   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:26.815201   78080 cri.go:89] found id: ""
	I0729 18:30:26.815230   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.815243   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:26.815251   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:26.815318   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:26.848600   78080 cri.go:89] found id: ""
	I0729 18:30:26.848628   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.848637   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:26.848644   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:26.848724   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:26.883828   78080 cri.go:89] found id: ""
	I0729 18:30:26.883872   78080 logs.go:276] 0 containers: []
	W0729 18:30:26.883883   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:26.883893   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:26.883908   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:26.936955   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:26.936987   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:26.952212   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:26.952238   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:27.019389   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:27.019413   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:27.019426   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:27.095654   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:27.095682   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:23.935397   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:26.435231   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:26.238749   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:28.239699   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:25.044320   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:27.542022   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:29.542274   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:29.637269   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:29.652138   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:29.652211   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:29.691063   78080 cri.go:89] found id: ""
	I0729 18:30:29.691094   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.691104   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:29.691111   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:29.691173   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:29.725188   78080 cri.go:89] found id: ""
	I0729 18:30:29.725224   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.725232   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:29.725240   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:29.725308   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:29.764118   78080 cri.go:89] found id: ""
	I0729 18:30:29.764149   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.764159   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:29.764167   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:29.764232   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:29.797884   78080 cri.go:89] found id: ""
	I0729 18:30:29.797909   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.797919   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:29.797927   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:29.797989   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:29.838784   78080 cri.go:89] found id: ""
	I0729 18:30:29.838808   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.838815   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:29.838821   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:29.838885   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:29.872394   78080 cri.go:89] found id: ""
	I0729 18:30:29.872420   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.872427   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:29.872433   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:29.872491   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:29.908966   78080 cri.go:89] found id: ""
	I0729 18:30:29.908995   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.909012   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:29.909020   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:29.909081   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:29.946322   78080 cri.go:89] found id: ""
	I0729 18:30:29.946344   78080 logs.go:276] 0 containers: []
	W0729 18:30:29.946352   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:29.946371   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:29.946386   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:30.019133   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:30.019166   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:30.019179   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:30.096499   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:30.096532   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:30.136487   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:30.136519   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:30.187341   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:30.187374   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:28.435472   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:30.934817   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:30.739101   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:32.742029   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:32.042850   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:34.042919   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:32.703546   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:32.716981   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:32.717042   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:32.753275   78080 cri.go:89] found id: ""
	I0729 18:30:32.753307   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.753318   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:32.753326   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:32.753393   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:32.789075   78080 cri.go:89] found id: ""
	I0729 18:30:32.789105   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.789116   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:32.789123   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:32.789185   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:32.822945   78080 cri.go:89] found id: ""
	I0729 18:30:32.822971   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.822979   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:32.822984   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:32.823033   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:32.856523   78080 cri.go:89] found id: ""
	I0729 18:30:32.856577   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.856589   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:32.856597   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:32.856661   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:32.895768   78080 cri.go:89] found id: ""
	I0729 18:30:32.895798   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.895810   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:32.895817   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:32.895876   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:32.934990   78080 cri.go:89] found id: ""
	I0729 18:30:32.935030   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.935042   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:32.935054   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:32.935132   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:32.970924   78080 cri.go:89] found id: ""
	I0729 18:30:32.970949   78080 logs.go:276] 0 containers: []
	W0729 18:30:32.970957   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:32.970964   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:32.971022   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:33.004133   78080 cri.go:89] found id: ""
	I0729 18:30:33.004164   78080 logs.go:276] 0 containers: []
	W0729 18:30:33.004173   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:33.004182   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:33.004202   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:33.043432   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:33.043467   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:33.095517   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:33.095554   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:33.108859   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:33.108889   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:33.180661   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:33.180681   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:33.180696   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:35.763324   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:35.777060   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:35.777138   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:35.812601   78080 cri.go:89] found id: ""
	I0729 18:30:35.812636   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.812647   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:35.812654   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:35.812719   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:35.848116   78080 cri.go:89] found id: ""
	I0729 18:30:35.848161   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.848172   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:35.848179   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:35.848240   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:35.895786   78080 cri.go:89] found id: ""
	I0729 18:30:35.895817   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.895829   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:35.895837   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:35.895911   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:35.936753   78080 cri.go:89] found id: ""
	I0729 18:30:35.936780   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.936787   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:35.936794   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:35.936848   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:35.971321   78080 cri.go:89] found id: ""
	I0729 18:30:35.971349   78080 logs.go:276] 0 containers: []
	W0729 18:30:35.971358   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:35.971371   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:35.971434   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:36.018702   78080 cri.go:89] found id: ""
	I0729 18:30:36.018725   78080 logs.go:276] 0 containers: []
	W0729 18:30:36.018732   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:36.018737   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:36.018792   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:36.054829   78080 cri.go:89] found id: ""
	I0729 18:30:36.054865   78080 logs.go:276] 0 containers: []
	W0729 18:30:36.054875   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:36.054882   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:36.054948   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:36.087456   78080 cri.go:89] found id: ""
	I0729 18:30:36.087483   78080 logs.go:276] 0 containers: []
	W0729 18:30:36.087492   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:36.087500   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:36.087512   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:36.140919   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:36.140951   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:36.155581   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:36.155614   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:36.227617   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:36.227642   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:36.227669   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:36.304610   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:36.304651   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:32.935270   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:34.935362   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:35.239258   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:37.242161   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:39.739031   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:36.043489   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:38.542041   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:38.843099   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:38.857571   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:38.857626   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:38.890760   78080 cri.go:89] found id: ""
	I0729 18:30:38.890790   78080 logs.go:276] 0 containers: []
	W0729 18:30:38.890801   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:38.890809   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:38.890884   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:38.932701   78080 cri.go:89] found id: ""
	I0729 18:30:38.932738   78080 logs.go:276] 0 containers: []
	W0729 18:30:38.932748   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:38.932755   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:38.932812   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:38.967379   78080 cri.go:89] found id: ""
	I0729 18:30:38.967406   78080 logs.go:276] 0 containers: []
	W0729 18:30:38.967416   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:38.967430   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:38.967490   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:39.000419   78080 cri.go:89] found id: ""
	I0729 18:30:39.000450   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.000459   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:39.000466   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:39.000528   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:39.033764   78080 cri.go:89] found id: ""
	I0729 18:30:39.033793   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.033802   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:39.033807   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:39.033857   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:39.070904   78080 cri.go:89] found id: ""
	I0729 18:30:39.070933   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.070944   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:39.070951   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:39.071010   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:39.107444   78080 cri.go:89] found id: ""
	I0729 18:30:39.107471   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.107480   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:39.107488   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:39.107549   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:39.141392   78080 cri.go:89] found id: ""
	I0729 18:30:39.141423   78080 logs.go:276] 0 containers: []
	W0729 18:30:39.141436   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:39.141449   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:39.141464   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:39.154874   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:39.154905   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:39.229370   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:39.229396   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:39.229413   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:39.310508   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:39.310538   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:39.352547   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:39.352569   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:41.908463   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:41.922132   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:41.922209   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:41.960404   78080 cri.go:89] found id: ""
	I0729 18:30:41.960431   78080 logs.go:276] 0 containers: []
	W0729 18:30:41.960439   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:41.960444   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:41.960498   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:41.994082   78080 cri.go:89] found id: ""
	I0729 18:30:41.994110   78080 logs.go:276] 0 containers: []
	W0729 18:30:41.994117   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:41.994123   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:41.994177   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:42.030301   78080 cri.go:89] found id: ""
	I0729 18:30:42.030322   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.030330   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:42.030336   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:42.030401   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:42.064310   78080 cri.go:89] found id: ""
	I0729 18:30:42.064339   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.064349   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:42.064356   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:42.064413   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:42.097705   78080 cri.go:89] found id: ""
	I0729 18:30:42.097738   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.097748   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:42.097761   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:42.097819   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:42.133254   78080 cri.go:89] found id: ""
	I0729 18:30:42.133282   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.133292   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:42.133299   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:42.133361   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:42.170028   78080 cri.go:89] found id: ""
	I0729 18:30:42.170054   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.170063   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:42.170075   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:42.170141   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:42.205680   78080 cri.go:89] found id: ""
	I0729 18:30:42.205712   78080 logs.go:276] 0 containers: []
	W0729 18:30:42.205723   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:42.205736   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:42.205749   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:37.442211   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:39.934866   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:41.935293   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:42.240035   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:41.041897   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:43.042300   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:42.246322   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:42.246350   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:42.300852   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:42.300884   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:42.316306   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:42.316333   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:42.389898   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:42.389920   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:42.389934   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:44.971238   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:44.984796   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:44.984846   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:45.021842   78080 cri.go:89] found id: ""
	I0729 18:30:45.021868   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.021877   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:45.021885   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:45.021958   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:45.059353   78080 cri.go:89] found id: ""
	I0729 18:30:45.059377   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.059387   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:45.059394   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:45.059456   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:45.094867   78080 cri.go:89] found id: ""
	I0729 18:30:45.094900   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.094911   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:45.094918   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:45.094974   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:45.128589   78080 cri.go:89] found id: ""
	I0729 18:30:45.128614   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.128622   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:45.128628   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:45.128671   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:45.160137   78080 cri.go:89] found id: ""
	I0729 18:30:45.160165   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.160172   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:45.160177   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:45.160228   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:45.205757   78080 cri.go:89] found id: ""
	I0729 18:30:45.205780   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.205787   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:45.205793   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:45.205840   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:45.250056   78080 cri.go:89] found id: ""
	I0729 18:30:45.250084   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.250091   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:45.250096   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:45.250179   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:45.285349   78080 cri.go:89] found id: ""
	I0729 18:30:45.285372   78080 logs.go:276] 0 containers: []
	W0729 18:30:45.285380   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:45.285389   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:45.285401   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:45.364188   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:45.364218   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:45.412638   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:45.412660   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:45.467713   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:45.467745   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:45.483811   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:45.483835   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:45.564866   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:44.434921   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:46.934237   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:44.740648   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:47.239253   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:49.240229   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:45.043415   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:47.542757   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:49.543251   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:48.065579   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:48.079441   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:48.079511   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:48.115540   78080 cri.go:89] found id: ""
	I0729 18:30:48.115569   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.115578   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:48.115586   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:48.115670   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:48.151810   78080 cri.go:89] found id: ""
	I0729 18:30:48.151834   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.151841   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:48.151847   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:48.151913   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:48.187459   78080 cri.go:89] found id: ""
	I0729 18:30:48.187490   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.187500   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:48.187508   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:48.187568   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:48.226804   78080 cri.go:89] found id: ""
	I0729 18:30:48.226835   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.226846   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:48.226853   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:48.226916   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:48.260413   78080 cri.go:89] found id: ""
	I0729 18:30:48.260439   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.260448   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:48.260455   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:48.260517   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:48.296719   78080 cri.go:89] found id: ""
	I0729 18:30:48.296743   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.296751   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:48.296756   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:48.296806   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:48.331969   78080 cri.go:89] found id: ""
	I0729 18:30:48.331995   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.332002   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:48.332008   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:48.332055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:48.370593   78080 cri.go:89] found id: ""
	I0729 18:30:48.370618   78080 logs.go:276] 0 containers: []
	W0729 18:30:48.370626   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:48.370634   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:48.370645   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:48.410653   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:48.410679   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:48.465467   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:48.465503   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:48.480025   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:48.480053   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:48.557806   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:48.557824   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:48.557840   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:51.140743   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:51.153970   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:51.154046   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:51.187826   78080 cri.go:89] found id: ""
	I0729 18:30:51.187851   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.187862   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:51.187868   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:51.187922   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:51.226140   78080 cri.go:89] found id: ""
	I0729 18:30:51.226172   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.226182   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:51.226189   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:51.226255   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:51.262321   78080 cri.go:89] found id: ""
	I0729 18:30:51.262349   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.262357   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:51.262378   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:51.262440   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:51.295356   78080 cri.go:89] found id: ""
	I0729 18:30:51.295383   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.295395   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:51.295403   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:51.295467   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:51.328320   78080 cri.go:89] found id: ""
	I0729 18:30:51.328349   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.328361   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:51.328367   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:51.328424   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:51.364202   78080 cri.go:89] found id: ""
	I0729 18:30:51.364233   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.364242   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:51.364249   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:51.364313   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:51.405500   78080 cri.go:89] found id: ""
	I0729 18:30:51.405529   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.405538   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:51.405544   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:51.405606   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:51.443519   78080 cri.go:89] found id: ""
	I0729 18:30:51.443541   78080 logs.go:276] 0 containers: []
	W0729 18:30:51.443548   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:51.443556   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:51.443567   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:51.495560   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:51.495599   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:51.512152   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:51.512178   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:51.590972   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:51.590992   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:51.591021   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:51.688717   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:51.688757   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:48.934577   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:51.437173   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:51.739680   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:54.238626   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:52.044254   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:54.545288   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:54.256011   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:54.270602   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:54.270653   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:54.311547   78080 cri.go:89] found id: ""
	I0729 18:30:54.311574   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.311584   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:54.311592   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:54.311655   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:54.347559   78080 cri.go:89] found id: ""
	I0729 18:30:54.347591   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.347602   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:54.347610   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:54.347675   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:54.382180   78080 cri.go:89] found id: ""
	I0729 18:30:54.382205   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.382212   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:54.382217   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:54.382264   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:54.415560   78080 cri.go:89] found id: ""
	I0729 18:30:54.415587   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.415594   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:54.415600   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:54.415655   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:54.450313   78080 cri.go:89] found id: ""
	I0729 18:30:54.450341   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.450351   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:54.450372   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:54.450439   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:54.484649   78080 cri.go:89] found id: ""
	I0729 18:30:54.484678   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.484687   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:54.484694   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:54.484741   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:54.520170   78080 cri.go:89] found id: ""
	I0729 18:30:54.520204   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.520212   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:54.520220   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:54.520270   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:54.562724   78080 cri.go:89] found id: ""
	I0729 18:30:54.562753   78080 logs.go:276] 0 containers: []
	W0729 18:30:54.562762   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:54.562772   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:54.562788   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:54.617461   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:54.617498   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:30:54.630970   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:54.630993   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:54.699332   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:54.699353   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:54.699366   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:54.779240   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:54.779276   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:53.934151   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:56.434549   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:56.239554   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:58.239583   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:57.041845   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:59.042164   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:30:57.318673   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:30:57.332789   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:30:57.332845   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:30:57.370434   78080 cri.go:89] found id: ""
	I0729 18:30:57.370461   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.370486   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:30:57.370492   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:30:57.370547   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:30:57.420694   78080 cri.go:89] found id: ""
	I0729 18:30:57.420724   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.420735   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:30:57.420742   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:30:57.420808   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:30:57.469245   78080 cri.go:89] found id: ""
	I0729 18:30:57.469271   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.469282   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:30:57.469288   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:30:57.469355   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:30:57.524937   78080 cri.go:89] found id: ""
	I0729 18:30:57.524963   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.524970   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:30:57.524976   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:30:57.525031   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:30:57.566803   78080 cri.go:89] found id: ""
	I0729 18:30:57.566830   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.566840   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:30:57.566847   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:30:57.566910   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:30:57.602786   78080 cri.go:89] found id: ""
	I0729 18:30:57.602814   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.602821   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:30:57.602826   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:30:57.602891   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:30:57.639319   78080 cri.go:89] found id: ""
	I0729 18:30:57.639347   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.639355   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:30:57.639361   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:30:57.639408   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:30:57.672580   78080 cri.go:89] found id: ""
	I0729 18:30:57.672610   78080 logs.go:276] 0 containers: []
	W0729 18:30:57.672621   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:30:57.672632   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:30:57.672647   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:30:57.751550   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:30:57.751572   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:30:57.751586   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:30:57.840057   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:30:57.840097   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:57.884698   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:30:57.884737   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:30:57.944468   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:30:57.944497   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:00.459605   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:00.473079   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:00.473138   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:00.508492   78080 cri.go:89] found id: ""
	I0729 18:31:00.508525   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.508536   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:00.508543   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:00.508604   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:00.544844   78080 cri.go:89] found id: ""
	I0729 18:31:00.544875   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.544886   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:00.544899   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:00.544960   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:00.578402   78080 cri.go:89] found id: ""
	I0729 18:31:00.578432   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.578443   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:00.578450   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:00.578508   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:00.611886   78080 cri.go:89] found id: ""
	I0729 18:31:00.611913   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.611922   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:00.611928   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:00.611989   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:00.649126   78080 cri.go:89] found id: ""
	I0729 18:31:00.649153   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.649162   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:00.649168   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:00.649229   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:00.686534   78080 cri.go:89] found id: ""
	I0729 18:31:00.686561   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.686571   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:00.686578   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:00.686639   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:00.718656   78080 cri.go:89] found id: ""
	I0729 18:31:00.718680   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.718690   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:00.718696   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:00.718755   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:00.752740   78080 cri.go:89] found id: ""
	I0729 18:31:00.752766   78080 logs.go:276] 0 containers: []
	W0729 18:31:00.752776   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:00.752786   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:00.752800   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:00.804293   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:00.804323   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:00.817988   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:00.818010   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:00.892178   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:00.892210   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:00.892231   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:00.973164   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:00.973199   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:30:58.434888   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:00.934518   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:00.239908   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:02.240038   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:04.240420   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:01.542080   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:03.542877   77627 pod_ready.go:102] pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:04.036213   77627 pod_ready.go:81] duration metric: took 4m0.000109353s for pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace to be "Ready" ...
	E0729 18:31:04.036235   77627 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-flh27" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 18:31:04.036250   77627 pod_ready.go:38] duration metric: took 4m10.564329435s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:31:04.036294   77627 kubeadm.go:597] duration metric: took 4m18.357564209s to restartPrimaryControlPlane
	W0729 18:31:04.036359   77627 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 18:31:04.036388   77627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:31:03.512105   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:03.526536   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:03.526602   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:03.561579   78080 cri.go:89] found id: ""
	I0729 18:31:03.561604   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.561614   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:03.561621   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:03.561681   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:03.603995   78080 cri.go:89] found id: ""
	I0729 18:31:03.604019   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.604028   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:03.604033   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:03.604079   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:03.640879   78080 cri.go:89] found id: ""
	I0729 18:31:03.640902   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.640910   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:03.640917   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:03.640971   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:03.675262   78080 cri.go:89] found id: ""
	I0729 18:31:03.675288   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.675296   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:03.675302   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:03.675349   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:03.708094   78080 cri.go:89] found id: ""
	I0729 18:31:03.708128   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.708137   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:03.708142   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:03.708190   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:03.748262   78080 cri.go:89] found id: ""
	I0729 18:31:03.748287   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.748298   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:03.748304   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:03.748360   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:03.789758   78080 cri.go:89] found id: ""
	I0729 18:31:03.789788   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.789800   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:03.789806   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:03.789893   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:03.829253   78080 cri.go:89] found id: ""
	I0729 18:31:03.829280   78080 logs.go:276] 0 containers: []
	W0729 18:31:03.829291   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:03.829299   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:03.829317   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:03.883012   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:03.883044   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:03.899264   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:03.899294   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:03.970241   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:03.970261   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:03.970274   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:04.056205   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:04.056244   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:06.604919   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:06.619163   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:06.619242   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:06.656939   78080 cri.go:89] found id: ""
	I0729 18:31:06.656970   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.656982   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:06.656989   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:06.657075   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:06.692577   78080 cri.go:89] found id: ""
	I0729 18:31:06.692608   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.692624   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:06.692632   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:06.692695   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:06.730045   78080 cri.go:89] found id: ""
	I0729 18:31:06.730077   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.730088   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:06.730096   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:06.730179   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:06.771794   78080 cri.go:89] found id: ""
	I0729 18:31:06.771820   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.771830   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:06.771838   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:06.771905   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:06.806149   78080 cri.go:89] found id: ""
	I0729 18:31:06.806177   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.806187   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:06.806194   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:06.806252   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:06.851875   78080 cri.go:89] found id: ""
	I0729 18:31:06.851905   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.851923   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:06.851931   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:06.851996   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:06.890335   78080 cri.go:89] found id: ""
	I0729 18:31:06.890382   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.890393   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:06.890399   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:06.890460   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:06.928873   78080 cri.go:89] found id: ""
	I0729 18:31:06.928902   78080 logs.go:276] 0 containers: []
	W0729 18:31:06.928912   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:06.928922   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:06.928935   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:06.944269   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:06.944295   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:07.011658   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:07.011682   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:07.011697   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:07.109899   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:07.109948   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:07.154569   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:07.154600   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:02.935054   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:05.434752   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:06.242994   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:08.738448   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:09.709101   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:09.722387   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:09.722461   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:09.760443   78080 cri.go:89] found id: ""
	I0729 18:31:09.760471   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.760481   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:09.760488   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:09.760551   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:09.796177   78080 cri.go:89] found id: ""
	I0729 18:31:09.796200   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.796209   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:09.796214   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:09.796264   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:09.831955   78080 cri.go:89] found id: ""
	I0729 18:31:09.831983   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.831990   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:09.831995   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:09.832055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:09.863913   78080 cri.go:89] found id: ""
	I0729 18:31:09.863939   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.863949   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:09.863956   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:09.864014   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:09.897553   78080 cri.go:89] found id: ""
	I0729 18:31:09.897575   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.897583   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:09.897588   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:09.897645   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:09.935203   78080 cri.go:89] found id: ""
	I0729 18:31:09.935221   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.935228   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:09.935238   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:09.935296   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:09.971098   78080 cri.go:89] found id: ""
	I0729 18:31:09.971125   78080 logs.go:276] 0 containers: []
	W0729 18:31:09.971135   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:09.971142   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:09.971224   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:10.006760   78080 cri.go:89] found id: ""
	I0729 18:31:10.006794   78080 logs.go:276] 0 containers: []
	W0729 18:31:10.006804   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:10.006815   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:10.006830   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:10.056037   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:10.056066   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:10.070633   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:10.070660   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:10.139953   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:10.139983   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:10.140002   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:10.220748   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:10.220781   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:07.436020   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:09.934218   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:11.934977   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:10.740109   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:13.239440   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:12.766391   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:12.779837   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:12.779889   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:12.813910   78080 cri.go:89] found id: ""
	I0729 18:31:12.813941   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.813951   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:12.813959   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:12.814008   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:12.848811   78080 cri.go:89] found id: ""
	I0729 18:31:12.848854   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.848865   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:12.848872   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:12.848927   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:12.884740   78080 cri.go:89] found id: ""
	I0729 18:31:12.884769   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.884780   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:12.884786   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:12.884833   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:12.923826   78080 cri.go:89] found id: ""
	I0729 18:31:12.923859   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.923870   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:12.923878   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:12.923930   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:12.959127   78080 cri.go:89] found id: ""
	I0729 18:31:12.959157   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.959168   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:12.959175   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:12.959245   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:12.994384   78080 cri.go:89] found id: ""
	I0729 18:31:12.994417   78080 logs.go:276] 0 containers: []
	W0729 18:31:12.994430   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:12.994439   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:12.994506   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:13.027854   78080 cri.go:89] found id: ""
	I0729 18:31:13.027883   78080 logs.go:276] 0 containers: []
	W0729 18:31:13.027892   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:13.027897   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:13.027951   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:13.062270   78080 cri.go:89] found id: ""
	I0729 18:31:13.062300   78080 logs.go:276] 0 containers: []
	W0729 18:31:13.062310   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:13.062321   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:13.062334   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:13.114473   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:13.114500   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:13.127820   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:13.127845   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:13.195830   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:13.195848   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:13.195862   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:13.281711   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:13.281748   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:15.824456   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:15.837532   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:15.837587   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:15.871706   78080 cri.go:89] found id: ""
	I0729 18:31:15.871739   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.871750   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:15.871757   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:15.871817   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:15.906882   78080 cri.go:89] found id: ""
	I0729 18:31:15.906905   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.906912   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:15.906917   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:15.906976   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:15.943015   78080 cri.go:89] found id: ""
	I0729 18:31:15.943043   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.943057   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:15.943065   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:15.943126   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:15.980501   78080 cri.go:89] found id: ""
	I0729 18:31:15.980528   78080 logs.go:276] 0 containers: []
	W0729 18:31:15.980536   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:15.980542   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:15.980588   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:16.014148   78080 cri.go:89] found id: ""
	I0729 18:31:16.014176   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.014183   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:16.014189   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:16.014236   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:16.048296   78080 cri.go:89] found id: ""
	I0729 18:31:16.048319   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.048326   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:16.048334   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:16.048392   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:16.084328   78080 cri.go:89] found id: ""
	I0729 18:31:16.084350   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.084358   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:16.084363   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:16.084411   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:16.120048   78080 cri.go:89] found id: ""
	I0729 18:31:16.120076   78080 logs.go:276] 0 containers: []
	W0729 18:31:16.120084   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:16.120092   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:16.120105   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:16.173476   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:16.173503   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:16.190200   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:16.190232   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:16.261993   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:16.262014   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:16.262026   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:16.340298   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:16.340331   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:14.434706   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:16.936150   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:15.739493   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:18.239834   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:18.883152   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:18.897292   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:18.897360   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:18.931276   78080 cri.go:89] found id: ""
	I0729 18:31:18.931303   78080 logs.go:276] 0 containers: []
	W0729 18:31:18.931313   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:18.931321   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:18.931379   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:18.975803   78080 cri.go:89] found id: ""
	I0729 18:31:18.975832   78080 logs.go:276] 0 containers: []
	W0729 18:31:18.975843   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:18.975853   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:18.975912   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:19.012920   78080 cri.go:89] found id: ""
	I0729 18:31:19.012951   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.012963   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:19.012970   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:19.013031   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:19.047640   78080 cri.go:89] found id: ""
	I0729 18:31:19.047667   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.047679   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:19.047687   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:19.047749   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:19.082495   78080 cri.go:89] found id: ""
	I0729 18:31:19.082522   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.082533   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:19.082540   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:19.082591   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:19.117988   78080 cri.go:89] found id: ""
	I0729 18:31:19.118016   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.118027   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:19.118034   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:19.118096   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:19.153725   78080 cri.go:89] found id: ""
	I0729 18:31:19.153753   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.153764   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:19.153771   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:19.153836   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:19.192827   78080 cri.go:89] found id: ""
	I0729 18:31:19.192857   78080 logs.go:276] 0 containers: []
	W0729 18:31:19.192868   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:19.192879   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:19.192894   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:19.208802   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:19.208833   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:19.285877   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:19.285897   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:19.285909   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:19.366563   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:19.366598   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:19.404563   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:19.404590   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:21.958449   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:21.971674   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:21.971739   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:22.006231   78080 cri.go:89] found id: ""
	I0729 18:31:22.006253   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.006261   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:22.006266   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:22.006314   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:22.042575   78080 cri.go:89] found id: ""
	I0729 18:31:22.042599   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.042609   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:22.042616   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:22.042679   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:22.079446   78080 cri.go:89] found id: ""
	I0729 18:31:22.079471   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.079482   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:22.079489   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:22.079554   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:22.115940   78080 cri.go:89] found id: ""
	I0729 18:31:22.115967   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.115976   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:22.115984   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:22.116055   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:22.149420   78080 cri.go:89] found id: ""
	I0729 18:31:22.149447   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.149456   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:22.149461   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:22.149511   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:22.182992   78080 cri.go:89] found id: ""
	I0729 18:31:22.183019   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.183027   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:22.183032   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:22.183090   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:22.218441   78080 cri.go:89] found id: ""
	I0729 18:31:22.218474   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.218487   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:22.218497   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:22.218564   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:19.434020   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:21.434806   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:20.739308   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:22.741502   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:22.263135   78080 cri.go:89] found id: ""
	I0729 18:31:22.263164   78080 logs.go:276] 0 containers: []
	W0729 18:31:22.263173   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:22.263183   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:22.263198   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:22.319010   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:22.319049   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:22.333151   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:22.333179   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:22.404661   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:22.404683   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:22.404706   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:22.488497   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:22.488537   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:25.032215   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:25.045114   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:25.045191   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:25.082244   78080 cri.go:89] found id: ""
	I0729 18:31:25.082278   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.082289   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:25.082299   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:25.082388   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:25.118295   78080 cri.go:89] found id: ""
	I0729 18:31:25.118318   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.118325   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:25.118331   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:25.118395   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:25.157948   78080 cri.go:89] found id: ""
	I0729 18:31:25.157974   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.157984   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:25.157992   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:25.158054   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:25.194708   78080 cri.go:89] found id: ""
	I0729 18:31:25.194734   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.194743   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:25.194751   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:25.194813   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:25.235923   78080 cri.go:89] found id: ""
	I0729 18:31:25.235952   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.235962   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:25.235969   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:25.236032   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:25.271316   78080 cri.go:89] found id: ""
	I0729 18:31:25.271342   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.271353   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:25.271360   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:25.271422   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:25.309399   78080 cri.go:89] found id: ""
	I0729 18:31:25.309427   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.309438   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:25.309446   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:25.309503   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:25.347979   78080 cri.go:89] found id: ""
	I0729 18:31:25.348009   78080 logs.go:276] 0 containers: []
	W0729 18:31:25.348021   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:25.348031   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:25.348046   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:25.400785   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:25.400812   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:25.413891   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:25.413915   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:25.487721   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:25.487752   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:25.487767   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:25.575500   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:25.575531   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:23.935200   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:26.434289   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:25.240961   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:27.738838   77859 pod_ready.go:102] pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:27.738866   77859 pod_ready.go:81] duration metric: took 4m0.005785253s for pod "metrics-server-569cc877fc-bm8tm" in "kube-system" namespace to be "Ready" ...
	E0729 18:31:27.738877   77859 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0729 18:31:27.738887   77859 pod_ready.go:38] duration metric: took 4m4.550102816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:31:27.738903   77859 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:31:27.738934   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:27.738991   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:27.798686   77859 cri.go:89] found id: "630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:27.798710   77859 cri.go:89] found id: ""
	I0729 18:31:27.798717   77859 logs.go:276] 1 containers: [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4]
	I0729 18:31:27.798774   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.804769   77859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:27.804827   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:27.849829   77859 cri.go:89] found id: "fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:27.849849   77859 cri.go:89] found id: ""
	I0729 18:31:27.849857   77859 logs.go:276] 1 containers: [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a]
	I0729 18:31:27.849909   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.854472   77859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:27.854540   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:27.891637   77859 cri.go:89] found id: "2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:27.891659   77859 cri.go:89] found id: ""
	I0729 18:31:27.891668   77859 logs.go:276] 1 containers: [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b]
	I0729 18:31:27.891715   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.896663   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:27.896713   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:27.941948   77859 cri.go:89] found id: "991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:27.941968   77859 cri.go:89] found id: ""
	I0729 18:31:27.941976   77859 logs.go:276] 1 containers: [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd]
	I0729 18:31:27.942018   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.946770   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:27.946821   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:27.988118   77859 cri.go:89] found id: "ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:27.988139   77859 cri.go:89] found id: ""
	I0729 18:31:27.988147   77859 logs.go:276] 1 containers: [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9]
	I0729 18:31:27.988193   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:27.992474   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:27.992535   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:28.032779   77859 cri.go:89] found id: "92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:28.032801   77859 cri.go:89] found id: ""
	I0729 18:31:28.032811   77859 logs.go:276] 1 containers: [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc]
	I0729 18:31:28.032859   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:28.037791   77859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:28.037838   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:28.081087   77859 cri.go:89] found id: ""
	I0729 18:31:28.081115   77859 logs.go:276] 0 containers: []
	W0729 18:31:28.081124   77859 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:28.081131   77859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 18:31:28.081183   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 18:31:28.123906   77859 cri.go:89] found id: "9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:28.123927   77859 cri.go:89] found id: "482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:28.123933   77859 cri.go:89] found id: ""
	I0729 18:31:28.123940   77859 logs.go:276] 2 containers: [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b]
	I0729 18:31:28.123979   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:28.128737   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:28.133127   77859 logs.go:123] Gathering logs for storage-provisioner [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481] ...
	I0729 18:31:28.133201   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:28.182950   77859 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:28.182985   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:28.241873   77859 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:28.241914   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 18:31:28.391355   77859 logs.go:123] Gathering logs for kube-apiserver [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4] ...
	I0729 18:31:28.391389   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:28.447637   77859 logs.go:123] Gathering logs for etcd [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a] ...
	I0729 18:31:28.447671   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:28.496815   77859 logs.go:123] Gathering logs for kube-scheduler [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd] ...
	I0729 18:31:28.496848   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:28.540617   77859 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:28.540651   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:29.063074   77859 logs.go:123] Gathering logs for container status ...
	I0729 18:31:29.063116   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:29.123348   77859 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:29.123378   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:29.137340   77859 logs.go:123] Gathering logs for coredns [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b] ...
	I0729 18:31:29.137365   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:29.174775   77859 logs.go:123] Gathering logs for kube-proxy [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9] ...
	I0729 18:31:29.174810   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:29.227526   77859 logs.go:123] Gathering logs for kube-controller-manager [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc] ...
	I0729 18:31:29.227560   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:29.281814   77859 logs.go:123] Gathering logs for storage-provisioner [482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b] ...
	I0729 18:31:29.281844   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:28.121761   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:28.136756   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:28.136813   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:28.175461   78080 cri.go:89] found id: ""
	I0729 18:31:28.175491   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.175502   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:31:28.175509   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:28.175567   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:28.215024   78080 cri.go:89] found id: ""
	I0729 18:31:28.215046   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.215055   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:31:28.215060   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:28.215122   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:28.253999   78080 cri.go:89] found id: ""
	I0729 18:31:28.254023   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.254031   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:31:28.254037   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:28.254090   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:28.287902   78080 cri.go:89] found id: ""
	I0729 18:31:28.287929   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.287940   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:31:28.287948   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:28.288006   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:28.322390   78080 cri.go:89] found id: ""
	I0729 18:31:28.322422   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.322433   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:31:28.322441   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:28.322500   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:28.356951   78080 cri.go:89] found id: ""
	I0729 18:31:28.356980   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.356991   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:31:28.356999   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:28.357060   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:28.393439   78080 cri.go:89] found id: ""
	I0729 18:31:28.393461   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.393471   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:28.393477   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:31:28.393535   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:31:28.431827   78080 cri.go:89] found id: ""
	I0729 18:31:28.431858   78080 logs.go:276] 0 containers: []
	W0729 18:31:28.431868   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:31:28.431878   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:28.431892   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:28.509279   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:31:28.509315   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:28.564036   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:28.564064   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:28.626970   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:28.627000   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:28.641417   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:28.641446   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:31:28.713406   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:31:31.213942   78080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:31.228942   78080 kubeadm.go:597] duration metric: took 4m3.040952507s to restartPrimaryControlPlane
	W0729 18:31:31.229020   78080 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 18:31:31.229042   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:31:31.696335   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:31:31.711230   78080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:31:31.720924   78080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:31:31.730348   78080 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:31:31.730378   78080 kubeadm.go:157] found existing configuration files:
	
	I0729 18:31:31.730418   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:31:31.739761   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:31:31.739810   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:31:31.749021   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:31:31.758107   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:31:31.758155   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:31:31.768326   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:31:31.777347   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:31:31.777388   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:31:31.786752   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:31:31.795728   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:31:31.795776   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:31:31.805369   78080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:31:31.883678   78080 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 18:31:31.883751   78080 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:31:32.040989   78080 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:31:32.041127   78080 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:31:32.041259   78080 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:31:32.261525   78080 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:31:28.434784   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:30.435227   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:32.263137   78080 out.go:204]   - Generating certificates and keys ...
	I0729 18:31:32.263242   78080 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:31:32.263349   78080 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:31:32.263461   78080 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:31:32.263554   78080 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:31:32.263640   78080 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:31:32.263724   78080 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:31:32.263801   78080 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:31:32.263872   78080 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:31:32.263993   78080 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:31:32.264109   78080 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:31:32.264164   78080 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:31:32.264255   78080 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:31:32.435248   78080 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:31:32.509478   78080 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:31:32.737003   78080 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:31:33.079523   78080 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:31:33.099871   78080 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:31:33.101450   78080 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:31:33.101520   78080 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:31:33.242577   78080 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:31:31.826678   77859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:31:31.845448   77859 api_server.go:72] duration metric: took 4m16.365262679s to wait for apiserver process to appear ...
	I0729 18:31:31.845478   77859 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:31:31.845519   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:31.845568   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:31.889194   77859 cri.go:89] found id: "630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:31.889226   77859 cri.go:89] found id: ""
	I0729 18:31:31.889236   77859 logs.go:276] 1 containers: [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4]
	I0729 18:31:31.889290   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:31.894167   77859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:31.894271   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:31.936287   77859 cri.go:89] found id: "fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:31.936306   77859 cri.go:89] found id: ""
	I0729 18:31:31.936315   77859 logs.go:276] 1 containers: [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a]
	I0729 18:31:31.936367   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:31.941051   77859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:31.941110   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:31.978033   77859 cri.go:89] found id: "2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:31.978057   77859 cri.go:89] found id: ""
	I0729 18:31:31.978066   77859 logs.go:276] 1 containers: [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b]
	I0729 18:31:31.978115   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:31.982632   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:31.982704   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:32.023792   77859 cri.go:89] found id: "991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:32.023812   77859 cri.go:89] found id: ""
	I0729 18:31:32.023820   77859 logs.go:276] 1 containers: [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd]
	I0729 18:31:32.023875   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.028309   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:32.028367   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:32.071944   77859 cri.go:89] found id: "ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:32.071966   77859 cri.go:89] found id: ""
	I0729 18:31:32.071975   77859 logs.go:276] 1 containers: [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9]
	I0729 18:31:32.072033   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.076171   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:32.076252   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:32.111357   77859 cri.go:89] found id: "92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:32.111379   77859 cri.go:89] found id: ""
	I0729 18:31:32.111389   77859 logs.go:276] 1 containers: [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc]
	I0729 18:31:32.111446   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.115718   77859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:32.115775   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:32.168552   77859 cri.go:89] found id: ""
	I0729 18:31:32.168586   77859 logs.go:276] 0 containers: []
	W0729 18:31:32.168597   77859 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:32.168604   77859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 18:31:32.168686   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 18:31:32.210002   77859 cri.go:89] found id: "9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:32.210027   77859 cri.go:89] found id: "482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:32.210034   77859 cri.go:89] found id: ""
	I0729 18:31:32.210043   77859 logs.go:276] 2 containers: [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b]
	I0729 18:31:32.210090   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.214929   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:32.220097   77859 logs.go:123] Gathering logs for container status ...
	I0729 18:31:32.220121   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:32.270343   77859 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:32.270384   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:32.329269   77859 logs.go:123] Gathering logs for kube-apiserver [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4] ...
	I0729 18:31:32.329303   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:32.388361   77859 logs.go:123] Gathering logs for storage-provisioner [482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b] ...
	I0729 18:31:32.388388   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:32.430072   77859 logs.go:123] Gathering logs for coredns [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b] ...
	I0729 18:31:32.430108   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:32.471669   77859 logs.go:123] Gathering logs for kube-scheduler [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd] ...
	I0729 18:31:32.471701   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:32.508395   77859 logs.go:123] Gathering logs for kube-proxy [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9] ...
	I0729 18:31:32.508424   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:32.548968   77859 logs.go:123] Gathering logs for kube-controller-manager [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc] ...
	I0729 18:31:32.549001   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:32.605269   77859 logs.go:123] Gathering logs for storage-provisioner [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481] ...
	I0729 18:31:32.605306   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:32.642298   77859 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:32.642330   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:32.659407   77859 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:32.659431   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 18:31:32.776509   77859 logs.go:123] Gathering logs for etcd [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a] ...
	I0729 18:31:32.776544   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:32.832365   77859 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:32.832395   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:35.748109   77627 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.711694865s)
	I0729 18:31:35.748184   77627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:31:35.765137   77627 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:31:35.775945   77627 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:31:35.786206   77627 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:31:35.786232   77627 kubeadm.go:157] found existing configuration files:
	
	I0729 18:31:35.786284   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:31:35.797157   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:31:35.797218   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:31:35.810497   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:31:35.821537   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:31:35.821603   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:31:35.832985   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:31:35.842247   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:31:35.842309   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:31:35.852578   77627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:31:35.861798   77627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:31:35.861858   77627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:31:35.872903   77627 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:31:35.926675   77627 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 18:31:35.926872   77627 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:31:36.089002   77627 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:31:36.089179   77627 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:31:36.089310   77627 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:31:36.321844   77627 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:31:33.244436   78080 out.go:204]   - Booting up control plane ...
	I0729 18:31:33.244570   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:31:33.245677   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:31:33.249530   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:31:33.250262   78080 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:31:33.261418   78080 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 18:31:36.324255   77627 out.go:204]   - Generating certificates and keys ...
	I0729 18:31:36.324352   77627 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:31:36.324435   77627 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:31:36.324539   77627 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:31:36.324619   77627 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:31:36.324707   77627 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:31:36.324780   77627 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:31:36.324864   77627 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:31:36.324945   77627 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:31:36.325036   77627 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:31:36.325175   77627 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:31:36.325340   77627 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:31:36.325425   77627 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:31:36.815491   77627 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:31:36.870914   77627 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 18:31:36.957705   77627 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:31:37.074845   77627 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:31:37.220920   77627 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:31:37.221651   77627 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:31:37.224384   77627 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:31:32.435653   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:34.933615   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:36.935070   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:35.792366   77859 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8444/healthz ...
	I0729 18:31:35.801160   77859 api_server.go:279] https://192.168.61.244:8444/healthz returned 200:
	ok
	I0729 18:31:35.804043   77859 api_server.go:141] control plane version: v1.30.3
	I0729 18:31:35.804063   77859 api_server.go:131] duration metric: took 3.958578435s to wait for apiserver health ...
	I0729 18:31:35.804072   77859 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:31:35.804099   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:31:35.804140   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:31:35.845977   77859 cri.go:89] found id: "630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:35.846003   77859 cri.go:89] found id: ""
	I0729 18:31:35.846018   77859 logs.go:276] 1 containers: [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4]
	I0729 18:31:35.846072   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:35.851227   77859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:31:35.851302   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:31:35.892117   77859 cri.go:89] found id: "fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:35.892142   77859 cri.go:89] found id: ""
	I0729 18:31:35.892158   77859 logs.go:276] 1 containers: [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a]
	I0729 18:31:35.892215   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:35.897136   77859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:31:35.897216   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:31:35.941512   77859 cri.go:89] found id: "2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:35.941532   77859 cri.go:89] found id: ""
	I0729 18:31:35.941541   77859 logs.go:276] 1 containers: [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b]
	I0729 18:31:35.941598   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:35.946072   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:31:35.946124   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:31:35.984306   77859 cri.go:89] found id: "991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:35.984327   77859 cri.go:89] found id: ""
	I0729 18:31:35.984335   77859 logs.go:276] 1 containers: [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd]
	I0729 18:31:35.984381   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:35.988605   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:31:35.988671   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:31:36.031476   77859 cri.go:89] found id: "ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:36.031504   77859 cri.go:89] found id: ""
	I0729 18:31:36.031514   77859 logs.go:276] 1 containers: [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9]
	I0729 18:31:36.031567   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:36.037262   77859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:31:36.037319   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:31:36.078054   77859 cri.go:89] found id: "92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:36.078076   77859 cri.go:89] found id: ""
	I0729 18:31:36.078084   77859 logs.go:276] 1 containers: [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc]
	I0729 18:31:36.078134   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:36.082628   77859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:31:36.082693   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:31:36.122768   77859 cri.go:89] found id: ""
	I0729 18:31:36.122791   77859 logs.go:276] 0 containers: []
	W0729 18:31:36.122799   77859 logs.go:278] No container was found matching "kindnet"
	I0729 18:31:36.122804   77859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 18:31:36.122849   77859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 18:31:36.166611   77859 cri.go:89] found id: "9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:36.166636   77859 cri.go:89] found id: "482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:36.166642   77859 cri.go:89] found id: ""
	I0729 18:31:36.166650   77859 logs.go:276] 2 containers: [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b]
	I0729 18:31:36.166712   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:36.171240   77859 ssh_runner.go:195] Run: which crictl
	I0729 18:31:36.175336   77859 logs.go:123] Gathering logs for kube-controller-manager [92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc] ...
	I0729 18:31:36.175354   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92b99f54da092dc49ec74469cf8be9fb25f6d2d69aad04710adb34e165003cbc"
	I0729 18:31:36.233224   77859 logs.go:123] Gathering logs for storage-provisioner [9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481] ...
	I0729 18:31:36.233255   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d54b3da125ce5b99c3a3bcefb3c8bd0dbffdaaae0e6e538b0e74a8375edb481"
	I0729 18:31:36.282788   77859 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:31:36.282820   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:31:36.675615   77859 logs.go:123] Gathering logs for kubelet ...
	I0729 18:31:36.675660   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:31:36.731559   77859 logs.go:123] Gathering logs for dmesg ...
	I0729 18:31:36.731602   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:31:36.747814   77859 logs.go:123] Gathering logs for kube-scheduler [991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd] ...
	I0729 18:31:36.747845   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 991e6d9556b66b11a7a63fd85b1c548e75506ae057966ffee38661ef60bb21fd"
	I0729 18:31:36.786940   77859 logs.go:123] Gathering logs for kube-proxy [ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9] ...
	I0729 18:31:36.787036   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec56fb749b981dfdce93463b65a413ae2fbf12ff3862c46abd434cb382cc5ff9"
	I0729 18:31:36.829659   77859 logs.go:123] Gathering logs for storage-provisioner [482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b] ...
	I0729 18:31:36.829694   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482ca3200e17e64e9a53833e5dc6edf6daefdc480682731cc7998e518422a96b"
	I0729 18:31:36.865907   77859 logs.go:123] Gathering logs for container status ...
	I0729 18:31:36.865939   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:31:36.908399   77859 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:31:36.908427   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 18:31:37.012220   77859 logs.go:123] Gathering logs for kube-apiserver [630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4] ...
	I0729 18:31:37.012255   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 630d0a93e04a31c8c37233496c30ed940e13e754c08ccf288b80e5b73ad59af4"
	I0729 18:31:37.063429   77859 logs.go:123] Gathering logs for etcd [fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a] ...
	I0729 18:31:37.063463   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fec93784adcb55d2585c855fc81e5603fe0cdff3d071aec4d0bea0dd1a44da4a"
	I0729 18:31:37.107615   77859 logs.go:123] Gathering logs for coredns [2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b] ...
	I0729 18:31:37.107654   77859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b2cc4240a68e0bc6f3dcaa6f25e8187ee68948f5e46c945b4125802bca56e3b"
	I0729 18:31:39.655973   77859 system_pods.go:59] 8 kube-system pods found
	I0729 18:31:39.656011   77859 system_pods.go:61] "coredns-7db6d8ff4d-mk6mx" [e005b1f9-cc7a-45aa-915e-85a461ebc814] Running
	I0729 18:31:39.656019   77859 system_pods.go:61] "etcd-default-k8s-diff-port-502055" [72b552cc-67b0-46bf-b3dd-b6732ebe8493] Running
	I0729 18:31:39.656025   77859 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-502055" [0dc22dbc-667e-4d6f-9938-b13bf3503f79] Running
	I0729 18:31:39.656032   77859 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-502055" [4df00b98-12cf-4359-9d98-8cce6ee9708a] Running
	I0729 18:31:39.656037   77859 system_pods.go:61] "kube-proxy-cgdm8" [57a99bb3-9e63-47dd-a958-5be7f3c0a9c0] Running
	I0729 18:31:39.656043   77859 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-502055" [247b7cd1-6267-469d-af05-b33b284ae846] Running
	I0729 18:31:39.656051   77859 system_pods.go:61] "metrics-server-569cc877fc-bm8tm" [6891d9ee-82db-4307-adf1-ff60d35506bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:31:39.656057   77859 system_pods.go:61] "storage-provisioner" [c2264d30-60dc-41f9-9b84-3b073031cf1b] Running
	I0729 18:31:39.656068   77859 system_pods.go:74] duration metric: took 3.851988452s to wait for pod list to return data ...
	I0729 18:31:39.656081   77859 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:31:39.658999   77859 default_sa.go:45] found service account: "default"
	I0729 18:31:39.659024   77859 default_sa.go:55] duration metric: took 2.935237ms for default service account to be created ...
	I0729 18:31:39.659034   77859 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 18:31:39.664926   77859 system_pods.go:86] 8 kube-system pods found
	I0729 18:31:39.664952   77859 system_pods.go:89] "coredns-7db6d8ff4d-mk6mx" [e005b1f9-cc7a-45aa-915e-85a461ebc814] Running
	I0729 18:31:39.664959   77859 system_pods.go:89] "etcd-default-k8s-diff-port-502055" [72b552cc-67b0-46bf-b3dd-b6732ebe8493] Running
	I0729 18:31:39.664966   77859 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-502055" [0dc22dbc-667e-4d6f-9938-b13bf3503f79] Running
	I0729 18:31:39.664973   77859 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-502055" [4df00b98-12cf-4359-9d98-8cce6ee9708a] Running
	I0729 18:31:39.664979   77859 system_pods.go:89] "kube-proxy-cgdm8" [57a99bb3-9e63-47dd-a958-5be7f3c0a9c0] Running
	I0729 18:31:39.664987   77859 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-502055" [247b7cd1-6267-469d-af05-b33b284ae846] Running
	I0729 18:31:39.665003   77859 system_pods.go:89] "metrics-server-569cc877fc-bm8tm" [6891d9ee-82db-4307-adf1-ff60d35506bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:31:39.665013   77859 system_pods.go:89] "storage-provisioner" [c2264d30-60dc-41f9-9b84-3b073031cf1b] Running
	I0729 18:31:39.665025   77859 system_pods.go:126] duration metric: took 5.974722ms to wait for k8s-apps to be running ...
	I0729 18:31:39.665036   77859 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 18:31:39.665093   77859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:31:39.685280   77859 system_svc.go:56] duration metric: took 20.237099ms WaitForService to wait for kubelet
	I0729 18:31:39.685311   77859 kubeadm.go:582] duration metric: took 4m24.205126513s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:31:39.685336   77859 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:31:39.688419   77859 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:31:39.688441   77859 node_conditions.go:123] node cpu capacity is 2
	I0729 18:31:39.688455   77859 node_conditions.go:105] duration metric: took 3.111768ms to run NodePressure ...
	I0729 18:31:39.688470   77859 start.go:241] waiting for startup goroutines ...
	I0729 18:31:39.688483   77859 start.go:246] waiting for cluster config update ...
	I0729 18:31:39.688497   77859 start.go:255] writing updated cluster config ...
	I0729 18:31:39.688830   77859 ssh_runner.go:195] Run: rm -f paused
	I0729 18:31:39.739685   77859 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 18:31:39.741763   77859 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-502055" cluster and "default" namespace by default
	I0729 18:31:37.226046   77627 out.go:204]   - Booting up control plane ...
	I0729 18:31:37.226163   77627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:31:37.227852   77627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:31:37.228710   77627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:31:37.248177   77627 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:31:37.248863   77627 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:31:37.248915   77627 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:31:37.376905   77627 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 18:31:37.377030   77627 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 18:31:37.878928   77627 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.066447ms
	I0729 18:31:37.879057   77627 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 18:31:38.935622   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:41.433736   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:42.880479   77627 kubeadm.go:310] [api-check] The API server is healthy after 5.001345894s
	I0729 18:31:42.892513   77627 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 18:31:42.910175   77627 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 18:31:42.948111   77627 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 18:31:42.948340   77627 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-409322 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 18:31:42.966823   77627 kubeadm.go:310] [bootstrap-token] Using token: f8a98i.3r2is78gllm02lfe
	I0729 18:31:42.968170   77627 out.go:204]   - Configuring RBAC rules ...
	I0729 18:31:42.968304   77627 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 18:31:42.978257   77627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 18:31:42.986458   77627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 18:31:42.989744   77627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 18:31:42.992484   77627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 18:31:42.995162   77627 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 18:31:43.287739   77627 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 18:31:43.726370   77627 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 18:31:44.290225   77627 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 18:31:44.291166   77627 kubeadm.go:310] 
	I0729 18:31:44.291267   77627 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 18:31:44.291278   77627 kubeadm.go:310] 
	I0729 18:31:44.291392   77627 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 18:31:44.291401   77627 kubeadm.go:310] 
	I0729 18:31:44.291436   77627 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 18:31:44.291530   77627 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 18:31:44.291589   77627 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 18:31:44.291606   77627 kubeadm.go:310] 
	I0729 18:31:44.291701   77627 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 18:31:44.291713   77627 kubeadm.go:310] 
	I0729 18:31:44.291788   77627 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 18:31:44.291797   77627 kubeadm.go:310] 
	I0729 18:31:44.291860   77627 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 18:31:44.291954   77627 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 18:31:44.292052   77627 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 18:31:44.292070   77627 kubeadm.go:310] 
	I0729 18:31:44.292167   77627 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 18:31:44.292269   77627 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 18:31:44.292280   77627 kubeadm.go:310] 
	I0729 18:31:44.292402   77627 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token f8a98i.3r2is78gllm02lfe \
	I0729 18:31:44.292543   77627 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 \
	I0729 18:31:44.292585   77627 kubeadm.go:310] 	--control-plane 
	I0729 18:31:44.292595   77627 kubeadm.go:310] 
	I0729 18:31:44.292710   77627 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 18:31:44.292732   77627 kubeadm.go:310] 
	I0729 18:31:44.292836   77627 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token f8a98i.3r2is78gllm02lfe \
	I0729 18:31:44.293015   77627 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 
	I0729 18:31:44.293440   77627 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:31:44.293500   77627 cni.go:84] Creating CNI manager for ""
	I0729 18:31:44.293512   77627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:31:44.295432   77627 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:31:44.296845   77627 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:31:44.308178   77627 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
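	(The log only records that a 496-byte conflist was copied to /etc/cni/net.d/1-k8s.conflist; the file contents are not captured. A minimal sketch of how to confirm what the bridge CNI configuration ended up containing on this node, using the same profile-scoped ssh invocation the report uses elsewhere:)

	    # Inspect the bridge CNI config minikube just wrote (contents not shown in this log)
	    out/minikube-linux-amd64 -p embed-certs-409322 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"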
	I0729 18:31:44.334403   77627 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 18:31:44.334542   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:44.334562   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-409322 minikube.k8s.io/updated_at=2024_07_29T18_31_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899 minikube.k8s.io/name=embed-certs-409322 minikube.k8s.io/primary=true
	I0729 18:31:44.366345   77627 ops.go:34] apiserver oom_adj: -16
	I0729 18:31:44.537970   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:43.433884   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:45.434714   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:45.039020   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:45.538831   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:46.038700   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:46.538761   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:47.038725   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:47.538100   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:48.038309   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:48.538896   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:49.039011   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:49.538333   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:47.435067   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:49.934658   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:50.038548   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:50.538590   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:51.038131   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:51.538253   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:52.038599   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:52.538827   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:53.038077   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:53.538860   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:54.038530   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:54.538952   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:52.433783   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:54.434442   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:56.434864   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:31:55.038263   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:55.538050   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:56.038006   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:56.538079   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:57.038042   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:57.538146   77627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:31:57.696274   77627 kubeadm.go:1113] duration metric: took 13.36179604s to wait for elevateKubeSystemPrivileges
	I0729 18:31:57.696308   77627 kubeadm.go:394] duration metric: took 5m12.066483926s to StartCluster
	I0729 18:31:57.696324   77627 settings.go:142] acquiring lock: {Name:mkd2c4591636cc1d19b23a0dab1807db2e7ea395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:31:57.696406   77627 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:31:57.698195   77627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:31:57.698479   77627 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:31:57.698592   77627 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 18:31:57.698674   77627 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-409322"
	I0729 18:31:57.698688   77627 addons.go:69] Setting metrics-server=true in profile "embed-certs-409322"
	I0729 18:31:57.698695   77627 addons.go:69] Setting default-storageclass=true in profile "embed-certs-409322"
	I0729 18:31:57.698714   77627 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-409322"
	I0729 18:31:57.698719   77627 addons.go:234] Setting addon metrics-server=true in "embed-certs-409322"
	W0729 18:31:57.698723   77627 addons.go:243] addon storage-provisioner should already be in state true
	W0729 18:31:57.698729   77627 addons.go:243] addon metrics-server should already be in state true
	I0729 18:31:57.698733   77627 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-409322"
	I0729 18:31:57.698755   77627 host.go:66] Checking if "embed-certs-409322" exists ...
	I0729 18:31:57.698676   77627 config.go:182] Loaded profile config "embed-certs-409322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:31:57.698760   77627 host.go:66] Checking if "embed-certs-409322" exists ...
	I0729 18:31:57.699157   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.699169   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.699207   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.699170   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.699229   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.699209   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.700201   77627 out.go:177] * Verifying Kubernetes components...
	I0729 18:31:57.701577   77627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:31:57.715130   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44873
	I0729 18:31:57.715156   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34459
	I0729 18:31:57.715708   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.715759   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.716320   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.716329   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.716344   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.716345   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.716666   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.716672   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.716868   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:31:57.717251   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.717283   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.717715   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41041
	I0729 18:31:57.718172   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.718684   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.718709   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.719111   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.719630   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.719670   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.720815   77627 addons.go:234] Setting addon default-storageclass=true in "embed-certs-409322"
	W0729 18:31:57.720839   77627 addons.go:243] addon default-storageclass should already be in state true
	I0729 18:31:57.720870   77627 host.go:66] Checking if "embed-certs-409322" exists ...
	I0729 18:31:57.721233   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.721264   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.733757   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0729 18:31:57.734325   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.735372   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.735397   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.735736   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.735928   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:31:57.735939   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35853
	I0729 18:31:57.736244   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.736923   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.736942   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.737318   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.737664   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:31:57.739761   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:31:57.740354   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:31:57.741103   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43867
	I0729 18:31:57.741489   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.741979   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.741999   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.742296   77627 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 18:31:57.742348   77627 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:31:57.742400   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.743411   77627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:31:57.743443   77627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:31:57.743498   77627 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 18:31:57.743515   77627 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 18:31:57.743537   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:31:57.743682   77627 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:31:57.743697   77627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 18:31:57.743711   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:31:57.748331   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.748743   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:31:57.748759   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.748941   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:31:57.748986   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.749110   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:31:57.749290   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:31:57.749423   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:31:57.749638   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:31:57.749650   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.749671   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:31:57.749834   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:31:57.749940   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:31:57.750051   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:31:57.760794   77627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33699
	I0729 18:31:57.761136   77627 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:31:57.761574   77627 main.go:141] libmachine: Using API Version  1
	I0729 18:31:57.761585   77627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:31:57.761954   77627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:31:57.762133   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetState
	I0729 18:31:57.764344   77627 main.go:141] libmachine: (embed-certs-409322) Calling .DriverName
	I0729 18:31:57.764532   77627 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 18:31:57.764541   77627 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 18:31:57.764555   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHHostname
	I0729 18:31:57.767111   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.767485   77627 main.go:141] libmachine: (embed-certs-409322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9f:57", ip: ""} in network mk-embed-certs-409322: {Iface:virbr1 ExpiryTime:2024-07-29 19:26:31 +0000 UTC Type:0 Mac:52:54:00:22:9f:57 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-409322 Clientid:01:52:54:00:22:9f:57}
	I0729 18:31:57.767498   77627 main.go:141] libmachine: (embed-certs-409322) DBG | domain embed-certs-409322 has defined IP address 192.168.39.58 and MAC address 52:54:00:22:9f:57 in network mk-embed-certs-409322
	I0729 18:31:57.767625   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHPort
	I0729 18:31:57.767763   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHKeyPath
	I0729 18:31:57.767875   77627 main.go:141] libmachine: (embed-certs-409322) Calling .GetSSHUsername
	I0729 18:31:57.768004   77627 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/embed-certs-409322/id_rsa Username:docker}
	I0729 18:31:57.965911   77627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:31:57.986557   77627 node_ready.go:35] waiting up to 6m0s for node "embed-certs-409322" to be "Ready" ...
	I0729 18:31:57.995790   77627 node_ready.go:49] node "embed-certs-409322" has status "Ready":"True"
	I0729 18:31:57.995809   77627 node_ready.go:38] duration metric: took 9.222398ms for node "embed-certs-409322" to be "Ready" ...
	I0729 18:31:57.995817   77627 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:31:58.003516   77627 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wpnfg" in "kube-system" namespace to be "Ready" ...
	I0729 18:31:58.047522   77627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:31:58.053274   77627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 18:31:58.053290   77627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 18:31:58.074101   77627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 18:31:58.074127   77627 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 18:31:58.088159   77627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 18:31:58.097491   77627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:31:58.097518   77627 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 18:31:58.125335   77627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:31:58.628396   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.628425   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.628466   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.628480   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.628847   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.628909   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Closing plugin on server side
	I0729 18:31:58.628918   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.628936   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.628946   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.628955   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.628914   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.628898   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Closing plugin on server side
	I0729 18:31:58.629017   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.629046   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.629268   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.629281   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.630616   77627 main.go:141] libmachine: (embed-certs-409322) DBG | Closing plugin on server side
	I0729 18:31:58.630636   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.630649   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.660029   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.660061   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.660339   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.660358   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.975389   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.975414   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.975721   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.975740   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.975750   77627 main.go:141] libmachine: Making call to close driver server
	I0729 18:31:58.975760   77627 main.go:141] libmachine: (embed-certs-409322) Calling .Close
	I0729 18:31:58.976034   77627 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:31:58.976051   77627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:31:58.976063   77627 addons.go:475] Verifying addon metrics-server=true in "embed-certs-409322"
	I0729 18:31:58.978172   77627 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 18:31:58.979568   77627 addons.go:510] duration metric: took 1.280977366s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
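	(Not part of the test flow above, just the standard way to double-check which addons actually ended up enabled on this profile after the "Enabled addons" summary:)

	    # List addon status for the embed-certs profile
	    out/minikube-linux-amd64 -p embed-certs-409322 addons list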
	I0729 18:31:58.935700   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:00.935984   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:00.009825   77627 pod_ready.go:92] pod "coredns-7db6d8ff4d-wpnfg" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:00.009846   77627 pod_ready.go:81] duration metric: took 2.006300447s for pod "coredns-7db6d8ff4d-wpnfg" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:00.009855   77627 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:02.016463   77627 pod_ready.go:102] pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:04.515885   77627 pod_ready.go:102] pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:03.432654   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:05.434708   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:06.517308   77627 pod_ready.go:102] pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:09.016256   77627 pod_ready.go:92] pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.016276   77627 pod_ready.go:81] duration metric: took 9.006414116s for pod "coredns-7db6d8ff4d-wztpj" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.016287   77627 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.021639   77627 pod_ready.go:92] pod "etcd-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.021661   77627 pod_ready.go:81] duration metric: took 5.365088ms for pod "etcd-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.021672   77627 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.026599   77627 pod_ready.go:92] pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.026618   77627 pod_ready.go:81] duration metric: took 4.939458ms for pod "kube-apiserver-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.026629   77627 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.031994   77627 pod_ready.go:92] pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.032009   77627 pod_ready.go:81] duration metric: took 5.37307ms for pod "kube-controller-manager-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.032020   77627 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kxf5z" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.036180   77627 pod_ready.go:92] pod "kube-proxy-kxf5z" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.036196   77627 pod_ready.go:81] duration metric: took 4.16934ms for pod "kube-proxy-kxf5z" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.036205   77627 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.414950   77627 pod_ready.go:92] pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace has status "Ready":"True"
	I0729 18:32:09.414973   77627 pod_ready.go:81] duration metric: took 378.76116ms for pod "kube-scheduler-embed-certs-409322" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:09.414981   77627 pod_ready.go:38] duration metric: took 11.419116871s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:32:09.414995   77627 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:32:09.415042   77627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:32:09.434210   77627 api_server.go:72] duration metric: took 11.735691998s to wait for apiserver process to appear ...
	I0729 18:32:09.434240   77627 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:32:09.434260   77627 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0729 18:32:09.439755   77627 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I0729 18:32:09.440612   77627 api_server.go:141] control plane version: v1.30.3
	I0729 18:32:09.440631   77627 api_server.go:131] duration metric: took 6.382802ms to wait for apiserver health ...
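	(The healthz probe above can be reproduced by hand against the same endpoint; a minimal sketch, where the CA path is the usual location minikube provisions on the node rather than something recorded in this log:)

	    # Equivalent manual probe of the apiserver health endpoint checked above
	    curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.39.58:8443/healthz
	    # or, skipping certificate verification:
	    curl -k https://192.168.39.58:8443/healthz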
	I0729 18:32:09.440640   77627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:32:09.617533   77627 system_pods.go:59] 9 kube-system pods found
	I0729 18:32:09.617564   77627 system_pods.go:61] "coredns-7db6d8ff4d-wpnfg" [687cbc8f-370a-4b72-bc1c-6ae36efe890e] Running
	I0729 18:32:09.617569   77627 system_pods.go:61] "coredns-7db6d8ff4d-wztpj" [1f1a01e7-9cec-4ba8-a340-8f9ccdd728d7] Running
	I0729 18:32:09.617572   77627 system_pods.go:61] "etcd-embed-certs-409322" [68de54c3-7d47-4e79-a064-08b013b1d910] Running
	I0729 18:32:09.617575   77627 system_pods.go:61] "kube-apiserver-embed-certs-409322" [dc1a0568-ef7c-493f-91fb-7438456daf6d] Running
	I0729 18:32:09.617579   77627 system_pods.go:61] "kube-controller-manager-embed-certs-409322" [da715e8c-2437-487b-b4e0-c93af2f079f7] Running
	I0729 18:32:09.617582   77627 system_pods.go:61] "kube-proxy-kxf5z" [74ed1812-b3bf-429d-b8f1-bdccb3415fb5] Running
	I0729 18:32:09.617584   77627 system_pods.go:61] "kube-scheduler-embed-certs-409322" [188cf21a-9a8a-45de-9a91-9e593626ce6d] Running
	I0729 18:32:09.617591   77627 system_pods.go:61] "metrics-server-569cc877fc-6q4nl" [57dc61cc-7490-49e5-9d03-c81aa5d25aea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:32:09.617596   77627 system_pods.go:61] "storage-provisioner" [b0b1e31d-9b5c-4e82-aea7-56184832c053] Running
	I0729 18:32:09.617604   77627 system_pods.go:74] duration metric: took 176.958452ms to wait for pod list to return data ...
	I0729 18:32:09.617614   77627 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:32:09.813846   77627 default_sa.go:45] found service account: "default"
	I0729 18:32:09.813871   77627 default_sa.go:55] duration metric: took 196.249412ms for default service account to be created ...
	I0729 18:32:09.813886   77627 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 18:32:10.019167   77627 system_pods.go:86] 9 kube-system pods found
	I0729 18:32:10.019199   77627 system_pods.go:89] "coredns-7db6d8ff4d-wpnfg" [687cbc8f-370a-4b72-bc1c-6ae36efe890e] Running
	I0729 18:32:10.019208   77627 system_pods.go:89] "coredns-7db6d8ff4d-wztpj" [1f1a01e7-9cec-4ba8-a340-8f9ccdd728d7] Running
	I0729 18:32:10.019214   77627 system_pods.go:89] "etcd-embed-certs-409322" [68de54c3-7d47-4e79-a064-08b013b1d910] Running
	I0729 18:32:10.019220   77627 system_pods.go:89] "kube-apiserver-embed-certs-409322" [dc1a0568-ef7c-493f-91fb-7438456daf6d] Running
	I0729 18:32:10.019227   77627 system_pods.go:89] "kube-controller-manager-embed-certs-409322" [da715e8c-2437-487b-b4e0-c93af2f079f7] Running
	I0729 18:32:10.019233   77627 system_pods.go:89] "kube-proxy-kxf5z" [74ed1812-b3bf-429d-b8f1-bdccb3415fb5] Running
	I0729 18:32:10.019239   77627 system_pods.go:89] "kube-scheduler-embed-certs-409322" [188cf21a-9a8a-45de-9a91-9e593626ce6d] Running
	I0729 18:32:10.019249   77627 system_pods.go:89] "metrics-server-569cc877fc-6q4nl" [57dc61cc-7490-49e5-9d03-c81aa5d25aea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:32:10.019257   77627 system_pods.go:89] "storage-provisioner" [b0b1e31d-9b5c-4e82-aea7-56184832c053] Running
	I0729 18:32:10.019267   77627 system_pods.go:126] duration metric: took 205.375742ms to wait for k8s-apps to be running ...
	I0729 18:32:10.019278   77627 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 18:32:10.019326   77627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:32:10.034632   77627 system_svc.go:56] duration metric: took 15.345747ms WaitForService to wait for kubelet
	I0729 18:32:10.034659   77627 kubeadm.go:582] duration metric: took 12.336145267s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:32:10.034687   77627 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:32:10.214205   77627 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:32:10.214240   77627 node_conditions.go:123] node cpu capacity is 2
	I0729 18:32:10.214255   77627 node_conditions.go:105] duration metric: took 179.559492ms to run NodePressure ...
	I0729 18:32:10.214269   77627 start.go:241] waiting for startup goroutines ...
	I0729 18:32:10.214279   77627 start.go:246] waiting for cluster config update ...
	I0729 18:32:10.214297   77627 start.go:255] writing updated cluster config ...
	I0729 18:32:10.214639   77627 ssh_runner.go:195] Run: rm -f paused
	I0729 18:32:10.264858   77627 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 18:32:10.266718   77627 out.go:177] * Done! kubectl is now configured to use "embed-certs-409322" cluster and "default" namespace by default
	I0729 18:32:07.934519   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:10.434593   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:13.262907   78080 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 18:32:13.263487   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:13.263679   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
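	(When the kubelet-check loop keeps getting "connection refused" on port 10248 as above, the usual next step on the node is to look at the kubelet service itself; a generic sketch, not taken from this run:)

	    # Check whether the kubelet is running and what it logged on startup
	    sudo systemctl status kubelet
	    sudo journalctl -u kubelet --no-pager -n 100
	    # The same health endpoint kubeadm is polling
	    curl -sSL http://localhost:10248/healthz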
	I0729 18:32:12.934686   77394 pod_ready.go:102] pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:13.928481   77394 pod_ready.go:81] duration metric: took 4m0.00080059s for pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace to be "Ready" ...
	E0729 18:32:13.928509   77394 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-jcdcw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 18:32:13.928528   77394 pod_ready.go:38] duration metric: took 4m10.042077465s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
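	(To see why metrics-server-78fcd8795b-jcdcw never reached "Ready" within the 4m0s budget, a generic inspection sketch; the context name is taken from the no-preload-888056 profile this process configures later in the log, and the actual events for this run are not recorded here:)

	    # Inspect the pod that stayed Pending and the recent kube-system events
	    kubectl --context no-preload-888056 -n kube-system describe pod metrics-server-78fcd8795b-jcdcw
	    kubectl --context no-preload-888056 -n kube-system get events --sort-by=.lastTimestamp | tail -n 20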
	I0729 18:32:13.928554   77394 kubeadm.go:597] duration metric: took 4m18.205651497s to restartPrimaryControlPlane
	W0729 18:32:13.928623   77394 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 18:32:13.928649   77394 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:32:18.264261   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:18.264554   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:32:28.265190   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:28.265433   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:32:40.226240   77394 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.297571665s)
	I0729 18:32:40.226316   77394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:32:40.243407   77394 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:32:40.254946   77394 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:32:40.264608   77394 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:32:40.264631   77394 kubeadm.go:157] found existing configuration files:
	
	I0729 18:32:40.264675   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:32:40.274180   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:32:40.274231   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:32:40.283752   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:32:40.293163   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:32:40.293232   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:32:40.302533   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:32:40.311972   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:32:40.312024   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:32:40.321513   77394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:32:40.330546   77394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:32:40.330599   77394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:32:40.340190   77394 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:32:40.389517   77394 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 18:32:40.389592   77394 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:32:40.508682   77394 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:32:40.508783   77394 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:32:40.508859   77394 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 18:32:40.517673   77394 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:32:40.520623   77394 out.go:204]   - Generating certificates and keys ...
	I0729 18:32:40.520726   77394 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:32:40.520824   77394 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:32:40.520893   77394 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:32:40.520961   77394 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:32:40.521045   77394 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:32:40.521094   77394 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:32:40.521171   77394 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:32:40.521254   77394 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:32:40.521357   77394 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:32:40.521475   77394 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:32:40.521535   77394 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:32:40.521606   77394 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:32:40.615870   77394 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:32:40.837902   77394 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 18:32:40.924418   77394 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:32:41.068573   77394 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:32:41.287201   77394 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:32:41.287991   77394 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:32:41.293523   77394 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:32:41.295211   77394 out.go:204]   - Booting up control plane ...
	I0729 18:32:41.295329   77394 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:32:41.295455   77394 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:32:41.295560   77394 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:32:41.317802   77394 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:32:41.324522   77394 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:32:41.324589   77394 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:32:41.463007   77394 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 18:32:41.463116   77394 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 18:32:41.982144   77394 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 519.208408ms
	I0729 18:32:41.982263   77394 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 18:32:46.983564   77394 kubeadm.go:310] [api-check] The API server is healthy after 5.001335599s
	I0729 18:32:46.999811   77394 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 18:32:47.018194   77394 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 18:32:47.051359   77394 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 18:32:47.051564   77394 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-888056 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 18:32:47.062615   77394 kubeadm.go:310] [bootstrap-token] Using token: a14u5x.5d4oe8yqdl9tiifc
	I0729 18:32:47.064051   77394 out.go:204]   - Configuring RBAC rules ...
	I0729 18:32:47.064187   77394 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 18:32:47.071856   77394 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 18:32:47.084985   77394 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 18:32:47.088622   77394 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 18:32:47.091797   77394 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 18:32:47.096194   77394 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 18:32:47.391394   77394 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 18:32:47.834314   77394 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 18:32:48.394665   77394 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 18:32:48.394689   77394 kubeadm.go:310] 
	I0729 18:32:48.394763   77394 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 18:32:48.394797   77394 kubeadm.go:310] 
	I0729 18:32:48.394928   77394 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 18:32:48.394941   77394 kubeadm.go:310] 
	I0729 18:32:48.394979   77394 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 18:32:48.395058   77394 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 18:32:48.395126   77394 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 18:32:48.395141   77394 kubeadm.go:310] 
	I0729 18:32:48.395221   77394 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 18:32:48.395230   77394 kubeadm.go:310] 
	I0729 18:32:48.395297   77394 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 18:32:48.395306   77394 kubeadm.go:310] 
	I0729 18:32:48.395374   77394 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 18:32:48.395467   77394 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 18:32:48.395554   77394 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 18:32:48.395563   77394 kubeadm.go:310] 
	I0729 18:32:48.395652   77394 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 18:32:48.395766   77394 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 18:32:48.395778   77394 kubeadm.go:310] 
	I0729 18:32:48.395886   77394 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a14u5x.5d4oe8yqdl9tiifc \
	I0729 18:32:48.396030   77394 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 \
	I0729 18:32:48.396062   77394 kubeadm.go:310] 	--control-plane 
	I0729 18:32:48.396071   77394 kubeadm.go:310] 
	I0729 18:32:48.396191   77394 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 18:32:48.396200   77394 kubeadm.go:310] 
	I0729 18:32:48.396276   77394 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a14u5x.5d4oe8yqdl9tiifc \
	I0729 18:32:48.396393   77394 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3ad6910cc298e73358b095b8604c424739352c0e8e39705c133ba83cb50e3e37 
	I0729 18:32:48.397540   77394 kubeadm.go:310] W0729 18:32:40.358164    2949 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 18:32:48.397921   77394 kubeadm.go:310] W0729 18:32:40.359840    2949 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 18:32:48.398071   77394 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:32:48.398090   77394 cni.go:84] Creating CNI manager for ""
	I0729 18:32:48.398099   77394 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:32:48.399641   77394 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:32:48.266531   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:32:48.266736   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:32:48.400846   77394 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:32:48.412594   77394 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 18:32:48.434792   77394 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 18:32:48.434872   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:48.434907   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-888056 minikube.k8s.io/updated_at=2024_07_29T18_32_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899 minikube.k8s.io/name=no-preload-888056 minikube.k8s.io/primary=true
	I0729 18:32:48.672892   77394 ops.go:34] apiserver oom_adj: -16
	I0729 18:32:48.673144   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:49.173811   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:49.673775   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:50.173717   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:50.673774   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:51.174068   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:51.673565   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:52.173431   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:52.673602   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:53.173912   77394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:32:53.315565   77394 kubeadm.go:1113] duration metric: took 4.880757535s to wait for elevateKubeSystemPrivileges
	I0729 18:32:53.315609   77394 kubeadm.go:394] duration metric: took 4m57.645527986s to StartCluster
	I0729 18:32:53.315633   77394 settings.go:142] acquiring lock: {Name:mkd2c4591636cc1d19b23a0dab1807db2e7ea395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:32:53.315736   77394 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:32:53.317360   77394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/kubeconfig: {Name:mk5063f02b2a50f0dcb76d540fd89014b8974dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:32:53.317579   77394 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.80 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:32:53.317669   77394 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 18:32:53.317784   77394 addons.go:69] Setting storage-provisioner=true in profile "no-preload-888056"
	I0729 18:32:53.317820   77394 addons.go:234] Setting addon storage-provisioner=true in "no-preload-888056"
	I0729 18:32:53.317817   77394 addons.go:69] Setting default-storageclass=true in profile "no-preload-888056"
	W0729 18:32:53.317835   77394 addons.go:243] addon storage-provisioner should already be in state true
	I0729 18:32:53.317840   77394 config.go:182] Loaded profile config "no-preload-888056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:32:53.317836   77394 addons.go:69] Setting metrics-server=true in profile "no-preload-888056"
	I0729 18:32:53.317861   77394 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-888056"
	I0729 18:32:53.317878   77394 host.go:66] Checking if "no-preload-888056" exists ...
	I0729 18:32:53.317882   77394 addons.go:234] Setting addon metrics-server=true in "no-preload-888056"
	W0729 18:32:53.317892   77394 addons.go:243] addon metrics-server should already be in state true
	I0729 18:32:53.317927   77394 host.go:66] Checking if "no-preload-888056" exists ...
	I0729 18:32:53.318302   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.318308   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.318334   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.318345   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.318301   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.318441   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.319022   77394 out.go:177] * Verifying Kubernetes components...
	I0729 18:32:53.320383   77394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:32:53.335666   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38257
	I0729 18:32:53.336170   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.336860   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.336896   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.337301   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.338104   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39753
	I0729 18:32:53.338137   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40655
	I0729 18:32:53.338545   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.338559   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.338595   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.338614   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.339076   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.339094   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.339163   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.339188   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.339510   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.340089   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.340126   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.340346   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.340557   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:32:53.344286   77394 addons.go:234] Setting addon default-storageclass=true in "no-preload-888056"
	W0729 18:32:53.344307   77394 addons.go:243] addon default-storageclass should already be in state true
	I0729 18:32:53.344335   77394 host.go:66] Checking if "no-preload-888056" exists ...
	I0729 18:32:53.344702   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.344727   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.356006   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33765
	I0729 18:32:53.356613   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.357135   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.357159   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.357517   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.357604   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34733
	I0729 18:32:53.357752   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:32:53.358011   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.358472   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.358490   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.358898   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.359110   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:32:53.359546   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:32:53.360493   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:32:53.361662   77394 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:32:53.362464   77394 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 18:32:53.363294   77394 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:32:53.363311   77394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 18:32:53.363331   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:32:53.364170   77394 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 18:32:53.364182   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41425
	I0729 18:32:53.364186   77394 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 18:32:53.364205   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:32:53.364560   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.365040   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.365061   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.365515   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.365963   77394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:32:53.365983   77394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:32:53.367883   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.368768   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.369264   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:32:53.369284   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.369576   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:32:53.369591   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.369858   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:32:53.369964   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:32:53.370009   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:32:53.370102   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:32:53.370169   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:32:53.370198   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:32:53.370317   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:32:53.370344   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:32:53.382571   77394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37093
	I0729 18:32:53.382940   77394 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:32:53.383311   77394 main.go:141] libmachine: Using API Version  1
	I0729 18:32:53.383336   77394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:32:53.383748   77394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:32:53.383946   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetState
	I0729 18:32:53.385570   77394 main.go:141] libmachine: (no-preload-888056) Calling .DriverName
	I0729 18:32:53.385761   77394 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 18:32:53.385775   77394 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 18:32:53.385792   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHHostname
	I0729 18:32:53.388411   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.388756   77394 main.go:141] libmachine: (no-preload-888056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b0:1a", ip: ""} in network mk-no-preload-888056: {Iface:virbr4 ExpiryTime:2024-07-29 19:17:36 +0000 UTC Type:0 Mac:52:54:00:b2:b0:1a Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:no-preload-888056 Clientid:01:52:54:00:b2:b0:1a}
	I0729 18:32:53.388774   77394 main.go:141] libmachine: (no-preload-888056) DBG | domain no-preload-888056 has defined IP address 192.168.72.80 and MAC address 52:54:00:b2:b0:1a in network mk-no-preload-888056
	I0729 18:32:53.389017   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHPort
	I0729 18:32:53.389193   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHKeyPath
	I0729 18:32:53.389350   77394 main.go:141] libmachine: (no-preload-888056) Calling .GetSSHUsername
	I0729 18:32:53.389463   77394 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/no-preload-888056/id_rsa Username:docker}
	I0729 18:32:53.585542   77394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:32:53.645556   77394 node_ready.go:35] waiting up to 6m0s for node "no-preload-888056" to be "Ready" ...
	I0729 18:32:53.657965   77394 node_ready.go:49] node "no-preload-888056" has status "Ready":"True"
	I0729 18:32:53.657997   77394 node_ready.go:38] duration metric: took 12.408834ms for node "no-preload-888056" to be "Ready" ...
	I0729 18:32:53.658010   77394 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:32:53.673068   77394 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace to be "Ready" ...
	I0729 18:32:53.724224   77394 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 18:32:53.724248   77394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 18:32:53.763536   77394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 18:32:53.774123   77394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:32:53.812615   77394 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 18:32:53.812639   77394 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 18:32:53.945274   77394 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:32:53.945303   77394 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 18:32:54.107180   77394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:32:54.184354   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.184379   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.184699   77394 main.go:141] libmachine: (no-preload-888056) DBG | Closing plugin on server side
	I0729 18:32:54.184748   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.184762   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.184776   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.184786   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.185015   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.185043   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.185077   77394 main.go:141] libmachine: (no-preload-888056) DBG | Closing plugin on server side
	I0729 18:32:54.244759   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.244781   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.245108   77394 main.go:141] libmachine: (no-preload-888056) DBG | Closing plugin on server side
	I0729 18:32:54.245156   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.245169   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.782604   77394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.008443119s)
	I0729 18:32:54.782663   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.782676   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.782990   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.783010   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.783020   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.783028   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.783265   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.783283   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.946051   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.946074   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.946396   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.946418   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.946430   77394 main.go:141] libmachine: Making call to close driver server
	I0729 18:32:54.946439   77394 main.go:141] libmachine: (no-preload-888056) Calling .Close
	I0729 18:32:54.946680   77394 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:32:54.946698   77394 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:32:54.946710   77394 addons.go:475] Verifying addon metrics-server=true in "no-preload-888056"
	I0729 18:32:54.948362   77394 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 18:32:54.949821   77394 addons.go:510] duration metric: took 1.632153415s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0729 18:32:55.679655   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace has status "Ready":"False"
	I0729 18:32:57.680175   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace has status "Ready":"False"
	I0729 18:33:00.179877   77394 pod_ready.go:102] pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace has status "Ready":"False"
	I0729 18:33:01.180068   77394 pod_ready.go:92] pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.180094   77394 pod_ready.go:81] duration metric: took 7.506992362s for pod "coredns-5cfdc65f69-bbh6c" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.180106   77394 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-j9ddw" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.185742   77394 pod_ready.go:92] pod "coredns-5cfdc65f69-j9ddw" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.185760   77394 pod_ready.go:81] duration metric: took 5.647157ms for pod "coredns-5cfdc65f69-j9ddw" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.185769   77394 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.190056   77394 pod_ready.go:92] pod "etcd-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.190077   77394 pod_ready.go:81] duration metric: took 4.30181ms for pod "etcd-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.190085   77394 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.194255   77394 pod_ready.go:92] pod "kube-apiserver-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.194273   77394 pod_ready.go:81] duration metric: took 4.182006ms for pod "kube-apiserver-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.194284   77394 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.199056   77394 pod_ready.go:92] pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.199072   77394 pod_ready.go:81] duration metric: took 4.779158ms for pod "kube-controller-manager-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.199081   77394 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-94ff9" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.578279   77394 pod_ready.go:92] pod "kube-proxy-94ff9" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:01.578299   77394 pod_ready.go:81] duration metric: took 379.211109ms for pod "kube-proxy-94ff9" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:01.578308   77394 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:02.378184   77394 pod_ready.go:92] pod "kube-scheduler-no-preload-888056" in "kube-system" namespace has status "Ready":"True"
	I0729 18:33:02.378205   77394 pod_ready.go:81] duration metric: took 799.890202ms for pod "kube-scheduler-no-preload-888056" in "kube-system" namespace to be "Ready" ...
	I0729 18:33:02.378212   77394 pod_ready.go:38] duration metric: took 8.720189182s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:33:02.378226   77394 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:33:02.378282   77394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:33:02.396023   77394 api_server.go:72] duration metric: took 9.07841179s to wait for apiserver process to appear ...
	I0729 18:33:02.396050   77394 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:33:02.396070   77394 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0729 18:33:02.403736   77394 api_server.go:279] https://192.168.72.80:8443/healthz returned 200:
	ok
	I0729 18:33:02.404828   77394 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 18:33:02.404850   77394 api_server.go:131] duration metric: took 8.793481ms to wait for apiserver health ...
	I0729 18:33:02.404858   77394 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:33:02.580656   77394 system_pods.go:59] 9 kube-system pods found
	I0729 18:33:02.580683   77394 system_pods.go:61] "coredns-5cfdc65f69-bbh6c" [66b43af3-78eb-437f-81d7-eedb4cc34349] Running
	I0729 18:33:02.580687   77394 system_pods.go:61] "coredns-5cfdc65f69-j9ddw" [679f8750-86aa-4e00-8291-6996b54b1930] Running
	I0729 18:33:02.580691   77394 system_pods.go:61] "etcd-no-preload-888056" [abcd648d-659a-4f02-a769-f2222eaac945] Running
	I0729 18:33:02.580695   77394 system_pods.go:61] "kube-apiserver-no-preload-888056" [99a48803-06b1-44a6-a0cc-f28f2ba7235f] Running
	I0729 18:33:02.580699   77394 system_pods.go:61] "kube-controller-manager-no-preload-888056" [6bb3d64c-9fef-41ee-a68d-170fac01dec5] Running
	I0729 18:33:02.580702   77394 system_pods.go:61] "kube-proxy-94ff9" [dd06899e-3d54-4b71-bda6-f8c6d06ce100] Running
	I0729 18:33:02.580704   77394 system_pods.go:61] "kube-scheduler-no-preload-888056" [a1b60226-df5e-45ce-8382-a8d277278129] Running
	I0729 18:33:02.580710   77394 system_pods.go:61] "metrics-server-78fcd8795b-9qqmj" [45bbbaf3-cf3e-4db1-9eec-693425bc5dff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:33:02.580714   77394 system_pods.go:61] "storage-provisioner" [0aacb67c-abea-47fb-a2f1-f1245e68599a] Running
	I0729 18:33:02.580721   77394 system_pods.go:74] duration metric: took 175.857868ms to wait for pod list to return data ...
	I0729 18:33:02.580728   77394 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:33:02.778962   77394 default_sa.go:45] found service account: "default"
	I0729 18:33:02.778987   77394 default_sa.go:55] duration metric: took 198.250326ms for default service account to be created ...
	I0729 18:33:02.778995   77394 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 18:33:02.981123   77394 system_pods.go:86] 9 kube-system pods found
	I0729 18:33:02.981159   77394 system_pods.go:89] "coredns-5cfdc65f69-bbh6c" [66b43af3-78eb-437f-81d7-eedb4cc34349] Running
	I0729 18:33:02.981166   77394 system_pods.go:89] "coredns-5cfdc65f69-j9ddw" [679f8750-86aa-4e00-8291-6996b54b1930] Running
	I0729 18:33:02.981175   77394 system_pods.go:89] "etcd-no-preload-888056" [abcd648d-659a-4f02-a769-f2222eaac945] Running
	I0729 18:33:02.981181   77394 system_pods.go:89] "kube-apiserver-no-preload-888056" [99a48803-06b1-44a6-a0cc-f28f2ba7235f] Running
	I0729 18:33:02.981186   77394 system_pods.go:89] "kube-controller-manager-no-preload-888056" [6bb3d64c-9fef-41ee-a68d-170fac01dec5] Running
	I0729 18:33:02.981190   77394 system_pods.go:89] "kube-proxy-94ff9" [dd06899e-3d54-4b71-bda6-f8c6d06ce100] Running
	I0729 18:33:02.981196   77394 system_pods.go:89] "kube-scheduler-no-preload-888056" [a1b60226-df5e-45ce-8382-a8d277278129] Running
	I0729 18:33:02.981206   77394 system_pods.go:89] "metrics-server-78fcd8795b-9qqmj" [45bbbaf3-cf3e-4db1-9eec-693425bc5dff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:33:02.981214   77394 system_pods.go:89] "storage-provisioner" [0aacb67c-abea-47fb-a2f1-f1245e68599a] Running
	I0729 18:33:02.981228   77394 system_pods.go:126] duration metric: took 202.226569ms to wait for k8s-apps to be running ...
	I0729 18:33:02.981239   77394 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 18:33:02.981290   77394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:33:02.999134   77394 system_svc.go:56] duration metric: took 17.878004ms WaitForService to wait for kubelet
	I0729 18:33:02.999169   77394 kubeadm.go:582] duration metric: took 9.681562891s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:33:02.999187   77394 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:33:03.179246   77394 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:33:03.179274   77394 node_conditions.go:123] node cpu capacity is 2
	I0729 18:33:03.179286   77394 node_conditions.go:105] duration metric: took 180.093491ms to run NodePressure ...
	I0729 18:33:03.179312   77394 start.go:241] waiting for startup goroutines ...
	I0729 18:33:03.179322   77394 start.go:246] waiting for cluster config update ...
	I0729 18:33:03.179344   77394 start.go:255] writing updated cluster config ...
	I0729 18:33:03.179658   77394 ssh_runner.go:195] Run: rm -f paused
	I0729 18:33:03.228664   77394 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 18:33:03.230706   77394 out.go:177] * Done! kubectl is now configured to use "no-preload-888056" cluster and "default" namespace by default
	I0729 18:33:28.269122   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:33:28.269375   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:33:28.269399   78080 kubeadm.go:310] 
	I0729 18:33:28.269433   78080 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 18:33:28.269471   78080 kubeadm.go:310] 		timed out waiting for the condition
	I0729 18:33:28.269480   78080 kubeadm.go:310] 
	I0729 18:33:28.269508   78080 kubeadm.go:310] 	This error is likely caused by:
	I0729 18:33:28.269541   78080 kubeadm.go:310] 		- The kubelet is not running
	I0729 18:33:28.269686   78080 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 18:33:28.269698   78080 kubeadm.go:310] 
	I0729 18:33:28.269846   78080 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 18:33:28.269902   78080 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 18:33:28.269946   78080 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 18:33:28.269969   78080 kubeadm.go:310] 
	I0729 18:33:28.270132   78080 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 18:33:28.270246   78080 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 18:33:28.270258   78080 kubeadm.go:310] 
	I0729 18:33:28.270434   78080 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 18:33:28.270567   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 18:33:28.270674   78080 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 18:33:28.270774   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 18:33:28.270784   78080 kubeadm.go:310] 
	I0729 18:33:28.271347   78080 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:33:28.271428   78080 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 18:33:28.271503   78080 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 18:33:28.271650   78080 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 18:33:28.271713   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:33:28.743675   78080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:33:28.759228   78080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:33:28.768522   78080 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:33:28.768546   78080 kubeadm.go:157] found existing configuration files:
	
	I0729 18:33:28.768593   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:33:28.777423   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:33:28.777481   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:33:28.786450   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:33:28.795335   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:33:28.795386   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:33:28.804519   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:33:28.813137   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:33:28.813193   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:33:28.822053   78080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:33:28.830463   78080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:33:28.830513   78080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:33:28.839818   78080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:33:29.066010   78080 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:35:25.197434   78080 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 18:35:25.197566   78080 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 18:35:25.199476   78080 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 18:35:25.199554   78080 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:35:25.199667   78080 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:35:25.199800   78080 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:35:25.199937   78080 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:35:25.200054   78080 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:35:25.201801   78080 out.go:204]   - Generating certificates and keys ...
	I0729 18:35:25.201875   78080 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:35:25.201944   78080 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:35:25.202073   78080 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:35:25.202136   78080 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:35:25.202231   78080 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:35:25.202287   78080 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:35:25.202339   78080 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:35:25.202426   78080 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:35:25.202492   78080 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:35:25.202560   78080 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:35:25.202603   78080 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:35:25.202692   78080 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:35:25.202779   78080 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:35:25.202863   78080 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:35:25.202962   78080 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:35:25.203070   78080 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:35:25.203213   78080 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:35:25.203289   78080 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:35:25.203323   78080 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:35:25.203381   78080 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:35:25.204837   78080 out.go:204]   - Booting up control plane ...
	I0729 18:35:25.204920   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:35:25.204985   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:35:25.205053   78080 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:35:25.205146   78080 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:35:25.205274   78080 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 18:35:25.205316   78080 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 18:35:25.205379   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.205591   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.205658   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.205828   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.205926   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.206142   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.206204   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.206411   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.206488   78080 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:35:25.206683   78080 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:35:25.206698   78080 kubeadm.go:310] 
	I0729 18:35:25.206755   78080 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 18:35:25.206817   78080 kubeadm.go:310] 		timed out waiting for the condition
	I0729 18:35:25.206827   78080 kubeadm.go:310] 
	I0729 18:35:25.206860   78080 kubeadm.go:310] 	This error is likely caused by:
	I0729 18:35:25.206890   78080 kubeadm.go:310] 		- The kubelet is not running
	I0729 18:35:25.206975   78080 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 18:35:25.206985   78080 kubeadm.go:310] 
	I0729 18:35:25.207099   78080 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 18:35:25.207134   78080 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 18:35:25.207167   78080 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 18:35:25.207177   78080 kubeadm.go:310] 
	I0729 18:35:25.207289   78080 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 18:35:25.207403   78080 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 18:35:25.207412   78080 kubeadm.go:310] 
	I0729 18:35:25.207532   78080 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 18:35:25.207640   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 18:35:25.207754   78080 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 18:35:25.207821   78080 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 18:35:25.207854   78080 kubeadm.go:310] 
	I0729 18:35:25.207886   78080 kubeadm.go:394] duration metric: took 7m57.080498205s to StartCluster
	I0729 18:35:25.207923   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:35:25.207983   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:35:25.251803   78080 cri.go:89] found id: ""
	I0729 18:35:25.251841   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.251852   78080 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:35:25.251859   78080 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:35:25.251920   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:35:25.287842   78080 cri.go:89] found id: ""
	I0729 18:35:25.287877   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.287895   78080 logs.go:278] No container was found matching "etcd"
	I0729 18:35:25.287903   78080 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:35:25.287967   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:35:25.324546   78080 cri.go:89] found id: ""
	I0729 18:35:25.324573   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.324582   78080 logs.go:278] No container was found matching "coredns"
	I0729 18:35:25.324588   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:35:25.324634   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:35:25.375723   78080 cri.go:89] found id: ""
	I0729 18:35:25.375746   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.375753   78080 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:35:25.375759   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:35:25.375812   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:35:25.412580   78080 cri.go:89] found id: ""
	I0729 18:35:25.412604   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.412612   78080 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:35:25.412617   78080 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:35:25.412664   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:35:25.449360   78080 cri.go:89] found id: ""
	I0729 18:35:25.449397   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.449406   78080 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:35:25.449413   78080 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:35:25.449464   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:35:25.485655   78080 cri.go:89] found id: ""
	I0729 18:35:25.485687   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.485698   78080 logs.go:278] No container was found matching "kindnet"
	I0729 18:35:25.485705   78080 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 18:35:25.485769   78080 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 18:35:25.521752   78080 cri.go:89] found id: ""
	I0729 18:35:25.521776   78080 logs.go:276] 0 containers: []
	W0729 18:35:25.521783   78080 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 18:35:25.521792   78080 logs.go:123] Gathering logs for container status ...
	I0729 18:35:25.521808   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:35:25.562894   78080 logs.go:123] Gathering logs for kubelet ...
	I0729 18:35:25.562922   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:35:25.623879   78080 logs.go:123] Gathering logs for dmesg ...
	I0729 18:35:25.623912   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:35:25.647315   78080 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:35:25.647341   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:35:25.744827   78080 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:35:25.744850   78080 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:35:25.744865   78080 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0729 18:35:25.849394   78080 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 18:35:25.849445   78080 out.go:239] * 
	W0729 18:35:25.849520   78080 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 18:35:25.849558   78080 out.go:239] * 
	W0729 18:35:25.850438   78080 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 18:35:25.853770   78080 out.go:177] 
	W0729 18:35:25.854982   78080 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 18:35:25.855035   78080 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 18:35:25.855060   78080 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 18:35:25.856444   78080 out.go:177] 
	
	
	==> CRI-O <==
	Jul 29 18:46:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:46:27.426622605Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278787426592972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ecfb70ea-05a5-4e77-bb74-6cf6c0929fa2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:46:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:46:27.427172754Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60e3f62b-7a77-4c32-a9bb-f64688effbe1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:46:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:46:27.427237405Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60e3f62b-7a77-4c32-a9bb-f64688effbe1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:46:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:46:27.427277679Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=60e3f62b-7a77-4c32-a9bb-f64688effbe1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:46:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:46:27.457396241Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=98b0f37d-a8fb-4f95-a5e1-3efc3b835c1a name=/runtime.v1.RuntimeService/Version
	Jul 29 18:46:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:46:27.457492723Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=98b0f37d-a8fb-4f95-a5e1-3efc3b835c1a name=/runtime.v1.RuntimeService/Version
	Jul 29 18:46:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:46:27.458917513Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c81381ef-a6f5-4991-8c03-c647a21a803d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:46:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:46:27.459289755Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278787459272898,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c81381ef-a6f5-4991-8c03-c647a21a803d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:46:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:46:27.459976326Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c368ef1-4917-4862-a5e9-c67f947ff177 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:46:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:46:27.460052873Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c368ef1-4917-4862-a5e9-c67f947ff177 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:46:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:46:27.460089510Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3c368ef1-4917-4862-a5e9-c67f947ff177 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:46:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:46:27.492473740Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=78a00e22-31aa-4154-b4e2-75c1ed79289f name=/runtime.v1.RuntimeService/Version
	Jul 29 18:46:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:46:27.492545364Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=78a00e22-31aa-4154-b4e2-75c1ed79289f name=/runtime.v1.RuntimeService/Version
	Jul 29 18:46:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:46:27.493401480Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ae8b3fed-f88b-468d-8bee-0f0e8cf46418 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:46:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:46:27.493855105Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278787493834798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae8b3fed-f88b-468d-8bee-0f0e8cf46418 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:46:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:46:27.494470688Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cba7dffd-0a8e-4543-b5ac-84a58146df00 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:46:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:46:27.494535657Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cba7dffd-0a8e-4543-b5ac-84a58146df00 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:46:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:46:27.494572218Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cba7dffd-0a8e-4543-b5ac-84a58146df00 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:46:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:46:27.527149949Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=77dce4a0-9c56-4307-9d40-b28cc3db9899 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:46:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:46:27.527238290Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=77dce4a0-9c56-4307-9d40-b28cc3db9899 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:46:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:46:27.528553644Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76144926-e951-4bb8-9a07-6ceaea976000 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:46:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:46:27.529115796Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278787529086541,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76144926-e951-4bb8-9a07-6ceaea976000 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:46:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:46:27.529643296Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e9cd510-ebc4-442b-a80c-7b4613fa0111 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:46:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:46:27.529711998Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e9cd510-ebc4-442b-a80c-7b4613fa0111 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:46:27 old-k8s-version-386663 crio[651]: time="2024-07-29 18:46:27.529832798Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9e9cd510-ebc4-442b-a80c-7b4613fa0111 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul29 18:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053138] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.049545] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.032104] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.544033] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.650966] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.461872] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.060757] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073737] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.211657] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.138817] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.279940] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +6.393435] systemd-fstab-generator[838]: Ignoring "noauto" option for root device
	[  +0.066263] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.863430] systemd-fstab-generator[963]: Ignoring "noauto" option for root device
	[ +12.812099] kauditd_printk_skb: 46 callbacks suppressed
	[Jul29 18:31] systemd-fstab-generator[5034]: Ignoring "noauto" option for root device
	[Jul29 18:33] systemd-fstab-generator[5313]: Ignoring "noauto" option for root device
	[  +0.068948] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:46:27 up 19 min,  0 users,  load average: 0.10, 0.06, 0.05
	Linux old-k8s-version-386663 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 29 18:46:26 old-k8s-version-386663 kubelet[6764]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000751230, 0xc000b54c20)
	Jul 29 18:46:26 old-k8s-version-386663 kubelet[6764]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 29 18:46:26 old-k8s-version-386663 kubelet[6764]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 29 18:46:26 old-k8s-version-386663 kubelet[6764]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 29 18:46:26 old-k8s-version-386663 kubelet[6764]: goroutine 157 [syscall]:
	Jul 29 18:46:26 old-k8s-version-386663 kubelet[6764]: syscall.Syscall6(0xe8, 0xc, 0xc000d0fb6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Jul 29 18:46:26 old-k8s-version-386663 kubelet[6764]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Jul 29 18:46:26 old-k8s-version-386663 kubelet[6764]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xc, 0xc000d0fb6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Jul 29 18:46:26 old-k8s-version-386663 kubelet[6764]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Jul 29 18:46:26 old-k8s-version-386663 kubelet[6764]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000a298e0, 0x0, 0x0, 0x0)
	Jul 29 18:46:26 old-k8s-version-386663 kubelet[6764]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Jul 29 18:46:26 old-k8s-version-386663 kubelet[6764]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc000b6e500)
	Jul 29 18:46:26 old-k8s-version-386663 kubelet[6764]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Jul 29 18:46:26 old-k8s-version-386663 kubelet[6764]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Jul 29 18:46:26 old-k8s-version-386663 kubelet[6764]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Jul 29 18:46:26 old-k8s-version-386663 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 29 18:46:26 old-k8s-version-386663 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 29 18:46:26 old-k8s-version-386663 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 135.
	Jul 29 18:46:26 old-k8s-version-386663 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 29 18:46:26 old-k8s-version-386663 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 29 18:46:26 old-k8s-version-386663 kubelet[6790]: I0729 18:46:26.960056    6790 server.go:416] Version: v1.20.0
	Jul 29 18:46:26 old-k8s-version-386663 kubelet[6790]: I0729 18:46:26.960442    6790 server.go:837] Client rotation is on, will bootstrap in background
	Jul 29 18:46:26 old-k8s-version-386663 kubelet[6790]: I0729 18:46:26.963347    6790 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 29 18:46:26 old-k8s-version-386663 kubelet[6790]: I0729 18:46:26.964832    6790 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Jul 29 18:46:26 old-k8s-version-386663 kubelet[6790]: W0729 18:46:26.964870    6790 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-386663 -n old-k8s-version-386663
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-386663 -n old-k8s-version-386663: exit status 2 (234.931351ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-386663" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (116.53s)
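Note: in this failure the kubelet on old-k8s-version-386663 never answered on port 10248, so kubeadm timed out waiting for the control plane and no Kubernetes containers were created. A minimal troubleshooting sketch based only on the commands the log above already recommends (the profile name is taken from the log; the --extra-config flag is the suggestion printed by minikube, not a verified fix for this run):

  # Inspect the kubelet and any control-plane containers inside the VM
  minikube -p old-k8s-version-386663 ssh "sudo journalctl -xeu kubelet | tail -n 100"
  minikube -p old-k8s-version-386663 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
  # Retry the start with the cgroup-driver hint from the suggestion above
  minikube start -p old-k8s-version-386663 --driver=kvm2 --container-runtime=crio \
    --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd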

                                                
                                    

Test pass (256/326)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 11.21
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.3/json-events 10.49
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.13
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.31.0-beta.0/json-events 5.56
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.13
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.55
31 TestOffline 122.82
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 136.84
40 TestAddons/serial/GCPAuth/Namespaces 0.14
42 TestAddons/parallel/Registry 15.59
44 TestAddons/parallel/InspektorGadget 11.88
46 TestAddons/parallel/HelmTiller 10.87
48 TestAddons/parallel/CSI 52.29
49 TestAddons/parallel/Headlamp 17.6
50 TestAddons/parallel/CloudSpanner 5.53
51 TestAddons/parallel/LocalPath 56.05
52 TestAddons/parallel/NvidiaDevicePlugin 6.5
53 TestAddons/parallel/Yakd 10.87
55 TestCertOptions 90.38
56 TestCertExpiration 310.39
58 TestForceSystemdFlag 52.9
59 TestForceSystemdEnv 47.32
61 TestKVMDriverInstallOrUpdate 1.23
65 TestErrorSpam/setup 42.92
66 TestErrorSpam/start 0.32
67 TestErrorSpam/status 0.72
68 TestErrorSpam/pause 1.55
69 TestErrorSpam/unpause 1.53
70 TestErrorSpam/stop 4.69
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 96.9
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 43.2
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.08
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.09
82 TestFunctional/serial/CacheCmd/cache/add_local 1.05
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.04
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.57
87 TestFunctional/serial/CacheCmd/cache/delete 0.08
88 TestFunctional/serial/MinikubeKubectlCmd 0.1
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
90 TestFunctional/serial/ExtraConfig 51.13
91 TestFunctional/serial/ComponentHealth 0.06
92 TestFunctional/serial/LogsCmd 1.37
93 TestFunctional/serial/LogsFileCmd 1.48
94 TestFunctional/serial/InvalidService 4.35
96 TestFunctional/parallel/ConfigCmd 0.33
97 TestFunctional/parallel/DashboardCmd 14.41
98 TestFunctional/parallel/DryRun 0.24
99 TestFunctional/parallel/InternationalLanguage 0.15
100 TestFunctional/parallel/StatusCmd 1.14
104 TestFunctional/parallel/ServiceCmdConnect 7.84
105 TestFunctional/parallel/AddonsCmd 0.35
106 TestFunctional/parallel/PersistentVolumeClaim 29.4
108 TestFunctional/parallel/SSHCmd 0.46
109 TestFunctional/parallel/CpCmd 1.45
110 TestFunctional/parallel/MySQL 21.22
111 TestFunctional/parallel/FileSync 0.25
112 TestFunctional/parallel/CertSync 1.4
116 TestFunctional/parallel/NodeLabels 0.07
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
120 TestFunctional/parallel/License 0.19
121 TestFunctional/parallel/ServiceCmd/DeployApp 11.21
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
123 TestFunctional/parallel/ProfileCmd/profile_list 0.37
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.28
126 TestFunctional/parallel/Version/short 0.04
127 TestFunctional/parallel/Version/components 0.64
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
132 TestFunctional/parallel/ImageCommands/ImageBuild 2.78
133 TestFunctional/parallel/ImageCommands/Setup 0.47
134 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
135 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
136 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
137 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.54
138 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.46
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.54
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 7.83
141 TestFunctional/parallel/ServiceCmd/List 0.27
142 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
143 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
144 TestFunctional/parallel/ServiceCmd/Format 0.33
145 TestFunctional/parallel/ServiceCmd/URL 0.47
155 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
156 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.13
157 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
158 TestFunctional/parallel/MountCmd/specific-port 1.54
159 TestFunctional/parallel/MountCmd/VerifyCleanup 1.19
160 TestFunctional/delete_echo-server_images 0.04
161 TestFunctional/delete_my-image_image 0.01
162 TestFunctional/delete_minikube_cached_images 0.01
166 TestMultiControlPlane/serial/StartCluster 208.93
167 TestMultiControlPlane/serial/DeployApp 6.05
168 TestMultiControlPlane/serial/PingHostFromPods 1.19
169 TestMultiControlPlane/serial/AddWorkerNode 56.28
170 TestMultiControlPlane/serial/NodeLabels 0.06
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.52
172 TestMultiControlPlane/serial/CopyFile 12.4
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.48
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.38
178 TestMultiControlPlane/serial/DeleteSecondaryNode 17.13
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
181 TestMultiControlPlane/serial/RestartCluster 353.85
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.37
183 TestMultiControlPlane/serial/AddSecondaryNode 74.7
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.52
188 TestJSONOutput/start/Command 95.05
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.69
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.62
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 7.31
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.18
216 TestMainNoArgs 0.04
217 TestMinikubeProfile 90.6
220 TestMountStart/serial/StartWithMountFirst 26.94
221 TestMountStart/serial/VerifyMountFirst 0.35
222 TestMountStart/serial/StartWithMountSecond 27.12
223 TestMountStart/serial/VerifyMountSecond 0.36
224 TestMountStart/serial/DeleteFirst 0.67
225 TestMountStart/serial/VerifyMountPostDelete 0.36
226 TestMountStart/serial/Stop 1.27
227 TestMountStart/serial/RestartStopped 24.74
228 TestMountStart/serial/VerifyMountPostStop 0.36
231 TestMultiNode/serial/FreshStart2Nodes 119.04
232 TestMultiNode/serial/DeployApp2Nodes 3.05
233 TestMultiNode/serial/PingHostFrom2Pods 0.76
234 TestMultiNode/serial/AddNode 50.05
235 TestMultiNode/serial/MultiNodeLabels 0.06
236 TestMultiNode/serial/ProfileList 0.21
237 TestMultiNode/serial/CopyFile 6.93
238 TestMultiNode/serial/StopNode 2.29
239 TestMultiNode/serial/StartAfterStop 37.9
241 TestMultiNode/serial/DeleteNode 2.27
243 TestMultiNode/serial/RestartMultiNode 180.82
244 TestMultiNode/serial/ValidateNameConflict 45.67
251 TestScheduledStopUnix 116.03
255 TestRunningBinaryUpgrade 177.63
259 TestStoppedBinaryUpgrade/Setup 0.46
263 TestStoppedBinaryUpgrade/Upgrade 146.04
268 TestNetworkPlugins/group/false 3.08
280 TestPause/serial/Start 59.16
281 TestStoppedBinaryUpgrade/MinikubeLogs 0.81
283 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
284 TestNoKubernetes/serial/StartWithK8s 52.08
285 TestPause/serial/SecondStartNoReconfiguration 37.23
286 TestNoKubernetes/serial/StartWithStopK8s 29.65
287 TestPause/serial/Pause 0.76
288 TestPause/serial/VerifyStatus 0.24
289 TestPause/serial/Unpause 0.79
290 TestPause/serial/PauseAgain 0.93
291 TestPause/serial/DeletePaused 1.06
292 TestPause/serial/VerifyDeletedResources 0.48
293 TestNoKubernetes/serial/Start 48.57
294 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
295 TestNoKubernetes/serial/ProfileList 0.49
296 TestNoKubernetes/serial/Stop 1.27
297 TestNoKubernetes/serial/StartNoArgs 88.66
298 TestNetworkPlugins/group/auto/Start 71.22
299 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
300 TestNetworkPlugins/group/kindnet/Start 105.06
301 TestNetworkPlugins/group/calico/Start 105.97
302 TestNetworkPlugins/group/auto/KubeletFlags 0.21
303 TestNetworkPlugins/group/auto/NetCatPod 11.21
304 TestNetworkPlugins/group/auto/DNS 33.02
305 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
306 TestNetworkPlugins/group/auto/Localhost 0.14
307 TestNetworkPlugins/group/auto/HairPin 0.13
308 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
309 TestNetworkPlugins/group/kindnet/NetCatPod 11.24
310 TestNetworkPlugins/group/kindnet/DNS 0.22
311 TestNetworkPlugins/group/kindnet/Localhost 0.18
312 TestNetworkPlugins/group/kindnet/HairPin 0.16
313 TestNetworkPlugins/group/custom-flannel/Start 82
314 TestNetworkPlugins/group/enable-default-cni/Start 124.03
315 TestNetworkPlugins/group/calico/ControllerPod 6.01
316 TestNetworkPlugins/group/calico/KubeletFlags 0.2
317 TestNetworkPlugins/group/calico/NetCatPod 11.2
318 TestNetworkPlugins/group/calico/DNS 0.19
319 TestNetworkPlugins/group/calico/Localhost 0.15
320 TestNetworkPlugins/group/calico/HairPin 0.14
321 TestNetworkPlugins/group/flannel/Start 88.05
322 TestNetworkPlugins/group/bridge/Start 139.75
323 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
324 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.24
325 TestNetworkPlugins/group/custom-flannel/DNS 0.25
326 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
327 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
330 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
331 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.24
332 TestNetworkPlugins/group/flannel/ControllerPod 6.01
333 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
334 TestNetworkPlugins/group/flannel/NetCatPod 10.21
335 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
336 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
337 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
338 TestNetworkPlugins/group/flannel/DNS 0.29
339 TestNetworkPlugins/group/flannel/Localhost 0.16
340 TestNetworkPlugins/group/flannel/HairPin 0.16
342 TestStartStop/group/no-preload/serial/FirstStart 100.94
344 TestStartStop/group/embed-certs/serial/FirstStart 121.13
345 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
346 TestNetworkPlugins/group/bridge/NetCatPod 14.26
347 TestNetworkPlugins/group/bridge/DNS 0.18
348 TestNetworkPlugins/group/bridge/Localhost 0.14
349 TestNetworkPlugins/group/bridge/HairPin 0.15
351 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 98.06
352 TestStartStop/group/no-preload/serial/DeployApp 8.29
353 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.02
355 TestStartStop/group/embed-certs/serial/DeployApp 8.27
356 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.92
358 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.26
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.88
364 TestStartStop/group/no-preload/serial/SecondStart 681.13
366 TestStartStop/group/embed-certs/serial/SecondStart 600.77
368 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 540.31
369 TestStartStop/group/old-k8s-version/serial/Stop 2.28
370 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
381 TestStartStop/group/newest-cni/serial/FirstStart 47.27
382 TestStartStop/group/newest-cni/serial/DeployApp 0
383 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.05
384 TestStartStop/group/newest-cni/serial/Stop 10.42
385 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
386 TestStartStop/group/newest-cni/serial/SecondStart 72.13
387 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
389 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
390 TestStartStop/group/newest-cni/serial/Pause 2.31
x
+
TestDownloadOnly/v1.20.0/json-events (11.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-488789 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-488789 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.209189934s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (11.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-488789
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-488789: exit status 85 (56.139089ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-488789 | jenkins | v1.33.1 | 29 Jul 24 16:55 UTC |          |
	|         | -p download-only-488789        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 16:55:43
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 16:55:43.402533   18405 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:55:43.402794   18405 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:43.402804   18405 out.go:304] Setting ErrFile to fd 2...
	I0729 16:55:43.402808   18405 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:43.403038   18405 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	W0729 16:55:43.403207   18405 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19345-11206/.minikube/config/config.json: open /home/jenkins/minikube-integration/19345-11206/.minikube/config/config.json: no such file or directory
	I0729 16:55:43.403801   18405 out.go:298] Setting JSON to true
	I0729 16:55:43.404700   18405 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2295,"bootTime":1722269848,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 16:55:43.404754   18405 start.go:139] virtualization: kvm guest
	I0729 16:55:43.407055   18405 out.go:97] [download-only-488789] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0729 16:55:43.407160   18405 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 16:55:43.407230   18405 notify.go:220] Checking for updates...
	I0729 16:55:43.408402   18405 out.go:169] MINIKUBE_LOCATION=19345
	I0729 16:55:43.409773   18405 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:55:43.411053   18405 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 16:55:43.412164   18405 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 16:55:43.413445   18405 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0729 16:55:43.415750   18405 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 16:55:43.415982   18405 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:55:43.878830   18405 out.go:97] Using the kvm2 driver based on user configuration
	I0729 16:55:43.878857   18405 start.go:297] selected driver: kvm2
	I0729 16:55:43.878863   18405 start.go:901] validating driver "kvm2" against <nil>
	I0729 16:55:43.879191   18405 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:43.879304   18405 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19345-11206/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 16:55:43.893479   18405 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 16:55:43.893525   18405 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:55:43.893984   18405 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0729 16:55:43.894128   18405 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 16:55:43.894181   18405 cni.go:84] Creating CNI manager for ""
	I0729 16:55:43.894194   18405 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 16:55:43.894201   18405 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:55:43.894256   18405 start.go:340] cluster config:
	{Name:download-only-488789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-488789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:55:43.894445   18405 iso.go:125] acquiring lock: {Name:mke302f851ce8256f9b44dd080ed38df68285cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:43.896293   18405 out.go:97] Downloading VM boot image ...
	I0729 16:55:43.896315   18405 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19345-11206/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 16:55:48.402964   18405 out.go:97] Starting "download-only-488789" primary control-plane node in "download-only-488789" cluster
	I0729 16:55:48.402988   18405 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 16:55:48.424084   18405 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 16:55:48.424114   18405 cache.go:56] Caching tarball of preloaded images
	I0729 16:55:48.424300   18405 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 16:55:48.425897   18405 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 16:55:48.425920   18405 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 16:55:48.454223   18405 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 16:55:53.060715   18405 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 16:55:53.060807   18405 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-488789 host does not exist
	  To start a cluster, run: "minikube start -p download-only-488789"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-488789
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.30.3/json-events (10.49s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-090253 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-090253 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.493183897s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (10.49s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-090253
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-090253: exit status 85 (54.520663ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-488789 | jenkins | v1.33.1 | 29 Jul 24 16:55 UTC |                     |
	|         | -p download-only-488789        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:55 UTC | 29 Jul 24 16:55 UTC |
	| delete  | -p download-only-488789        | download-only-488789 | jenkins | v1.33.1 | 29 Jul 24 16:55 UTC | 29 Jul 24 16:55 UTC |
	| start   | -o=json --download-only        | download-only-090253 | jenkins | v1.33.1 | 29 Jul 24 16:55 UTC |                     |
	|         | -p download-only-090253        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 16:55:54
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 16:55:54.916373   18625 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:55:54.916467   18625 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:54.916478   18625 out.go:304] Setting ErrFile to fd 2...
	I0729 16:55:54.916482   18625 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:54.916677   18625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 16:55:54.917216   18625 out.go:298] Setting JSON to true
	I0729 16:55:54.918032   18625 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2307,"bootTime":1722269848,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 16:55:54.918088   18625 start.go:139] virtualization: kvm guest
	I0729 16:55:54.920131   18625 out.go:97] [download-only-090253] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 16:55:54.920256   18625 notify.go:220] Checking for updates...
	I0729 16:55:54.921515   18625 out.go:169] MINIKUBE_LOCATION=19345
	I0729 16:55:54.922719   18625 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:55:54.923999   18625 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 16:55:54.925381   18625 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 16:55:54.926563   18625 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0729 16:55:54.928940   18625 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 16:55:54.929123   18625 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:55:54.959514   18625 out.go:97] Using the kvm2 driver based on user configuration
	I0729 16:55:54.959536   18625 start.go:297] selected driver: kvm2
	I0729 16:55:54.959542   18625 start.go:901] validating driver "kvm2" against <nil>
	I0729 16:55:54.959872   18625 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:54.959955   18625 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19345-11206/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 16:55:54.974166   18625 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 16:55:54.974211   18625 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:55:54.974697   18625 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0729 16:55:54.974873   18625 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 16:55:54.974900   18625 cni.go:84] Creating CNI manager for ""
	I0729 16:55:54.974910   18625 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 16:55:54.974923   18625 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:55:54.974982   18625 start.go:340] cluster config:
	{Name:download-only-090253 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-090253 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:55:54.975095   18625 iso.go:125] acquiring lock: {Name:mke302f851ce8256f9b44dd080ed38df68285cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:54.976584   18625 out.go:97] Starting "download-only-090253" primary control-plane node in "download-only-090253" cluster
	I0729 16:55:54.976607   18625 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 16:55:55.003318   18625 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 16:55:55.003340   18625 cache.go:56] Caching tarball of preloaded images
	I0729 16:55:55.003458   18625 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 16:55:55.005054   18625 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0729 16:55:55.005066   18625 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0729 16:55:55.031903   18625 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:15191286f02471d9b3ea0b587fcafc39 -> /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 16:56:04.018846   18625 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0729 16:56:04.018947   18625 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0729 16:56:04.790325   18625 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 16:56:04.790708   18625 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/download-only-090253/config.json ...
	I0729 16:56:04.790741   18625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/download-only-090253/config.json: {Name:mkdd25629f29be45e4fc4fda5ce7ffcdee50b2bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:56:04.790915   18625 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 16:56:04.791098   18625 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-090253 host does not exist
	  To start a cluster, run: "minikube start -p download-only-090253"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-090253
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.0-beta.0/json-events (5.56s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-254884 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-254884 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.55518046s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (5.56s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-254884
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-254884: exit status 85 (56.361638ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-488789 | jenkins | v1.33.1 | 29 Jul 24 16:55 UTC |                     |
	|         | -p download-only-488789             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:55 UTC | 29 Jul 24 16:55 UTC |
	| delete  | -p download-only-488789             | download-only-488789 | jenkins | v1.33.1 | 29 Jul 24 16:55 UTC | 29 Jul 24 16:55 UTC |
	| start   | -o=json --download-only             | download-only-090253 | jenkins | v1.33.1 | 29 Jul 24 16:55 UTC |                     |
	|         | -p download-only-090253             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:56 UTC | 29 Jul 24 16:56 UTC |
	| delete  | -p download-only-090253             | download-only-090253 | jenkins | v1.33.1 | 29 Jul 24 16:56 UTC | 29 Jul 24 16:56 UTC |
	| start   | -o=json --download-only             | download-only-254884 | jenkins | v1.33.1 | 29 Jul 24 16:56 UTC |                     |
	|         | -p download-only-254884             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 16:56:05
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 16:56:05.714935   18832 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:56:05.715157   18832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:56:05.715165   18832 out.go:304] Setting ErrFile to fd 2...
	I0729 16:56:05.715170   18832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:56:05.715327   18832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 16:56:05.715839   18832 out.go:298] Setting JSON to true
	I0729 16:56:05.716904   18832 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2318,"bootTime":1722269848,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 16:56:05.716963   18832 start.go:139] virtualization: kvm guest
	I0729 16:56:05.719167   18832 out.go:97] [download-only-254884] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 16:56:05.719291   18832 notify.go:220] Checking for updates...
	I0729 16:56:05.720571   18832 out.go:169] MINIKUBE_LOCATION=19345
	I0729 16:56:05.721804   18832 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:56:05.722922   18832 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 16:56:05.724221   18832 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 16:56:05.725454   18832 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0729 16:56:05.727664   18832 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 16:56:05.727872   18832 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:56:05.760408   18832 out.go:97] Using the kvm2 driver based on user configuration
	I0729 16:56:05.760441   18832 start.go:297] selected driver: kvm2
	I0729 16:56:05.760446   18832 start.go:901] validating driver "kvm2" against <nil>
	I0729 16:56:05.760766   18832 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:56:05.760859   18832 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19345-11206/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 16:56:05.775878   18832 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 16:56:05.775926   18832 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:56:05.776407   18832 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0729 16:56:05.776591   18832 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 16:56:05.776650   18832 cni.go:84] Creating CNI manager for ""
	I0729 16:56:05.776667   18832 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 16:56:05.776683   18832 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:56:05.776761   18832 start.go:340] cluster config:
	{Name:download-only-254884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-254884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:56:05.776862   18832 iso.go:125] acquiring lock: {Name:mke302f851ce8256f9b44dd080ed38df68285cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:56:05.778762   18832 out.go:97] Starting "download-only-254884" primary control-plane node in "download-only-254884" cluster
	I0729 16:56:05.778784   18832 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 16:56:05.802150   18832 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 16:56:05.802181   18832 cache.go:56] Caching tarball of preloaded images
	I0729 16:56:05.802573   18832 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 16:56:05.804402   18832 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0729 16:56:05.804426   18832 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 16:56:05.836729   18832 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:3743f5ddb63994a661f14e5a8d3af98c -> /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 16:56:09.916859   18832 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 16:56:09.916958   18832 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19345-11206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 16:56:10.653795   18832 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0729 16:56:10.654149   18832 profile.go:143] Saving config to /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/download-only-254884/config.json ...
	I0729 16:56:10.654177   18832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/download-only-254884/config.json: {Name:mk38bbacb81a7b6f4e3b75fec0663e7e60aa8969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:56:10.654353   18832 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 16:56:10.654532   18832 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19345-11206/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-254884 host does not exist
	  To start a cluster, run: "minikube start -p download-only-254884"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-254884
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.55s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-601375 --alsologtostderr --binary-mirror http://127.0.0.1:35651 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-601375" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-601375
--- PASS: TestBinaryMirror (0.55s)

TestOffline (122.82s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-345856 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-345856 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m1.909590976s)
helpers_test.go:175: Cleaning up "offline-crio-345856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-345856
--- PASS: TestOffline (122.82s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-433102
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-433102: exit status 85 (48.78043ms)

-- stdout --
	* Profile "addons-433102" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-433102"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-433102
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-433102: exit status 85 (47.694882ms)

-- stdout --
	* Profile "addons-433102" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-433102"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (136.84s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-433102 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-433102 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m16.837911567s)
--- PASS: TestAddons/Setup (136.84s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-433102 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-433102 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/parallel/Registry (15.59s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.616423ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-bz6n2" [61225496-6f2a-48fa-b4f8-eab75fc915ba] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004899993s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-wnpcd" [5728a955-abcb-481c-8e81-300240983718] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005230459s
addons_test.go:342: (dbg) Run:  kubectl --context addons-433102 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-433102 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-433102 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.807367854s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-433102 ip
2024/07/29 16:59:12 [DEBUG] GET http://192.168.39.73:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-433102 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.59s)

TestAddons/parallel/InspektorGadget (11.88s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-lzt9b" [8c967897-28f8-457c-927a-1830b09f34b1] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005544126s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-433102
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-433102: (5.872834569s)
--- PASS: TestAddons/parallel/InspektorGadget (11.88s)

TestAddons/parallel/HelmTiller (10.87s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 3.204584ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-dvkm9" [8c867f82-b890-4ac8-aa2d-74386a1f3bdb] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.006762592s
addons_test.go:475: (dbg) Run:  kubectl --context addons-433102 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-433102 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.851668431s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-433102 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:492: (dbg) Done: out/minikube-linux-amd64 -p addons-433102 addons disable helm-tiller --alsologtostderr -v=1: (1.011403246s)
--- PASS: TestAddons/parallel/HelmTiller (10.87s)

TestAddons/parallel/CSI (52.29s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 9.901439ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-433102 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433102 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433102 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433102 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433102 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433102 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433102 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433102 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433102 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433102 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433102 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433102 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433102 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433102 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433102 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433102 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433102 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433102 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433102 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-433102 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [91758244-be5e-44e5-862a-aae38961abfc] Pending
helpers_test.go:344: "task-pv-pod" [91758244-be5e-44e5-862a-aae38961abfc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [91758244-be5e-44e5-862a-aae38961abfc] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.003368172s
addons_test.go:590: (dbg) Run:  kubectl --context addons-433102 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-433102 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-433102 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-433102 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-433102 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-433102 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433102 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433102 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433102 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-433102 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [77b0fc76-2881-46e6-bfa7-ece6e8ea5061] Pending
helpers_test.go:344: "task-pv-pod-restore" [77b0fc76-2881-46e6-bfa7-ece6e8ea5061] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [77b0fc76-2881-46e6-bfa7-ece6e8ea5061] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004625064s
addons_test.go:632: (dbg) Run:  kubectl --context addons-433102 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-433102 delete pod task-pv-pod-restore: (1.693843178s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-433102 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-433102 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-433102 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-433102 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.79155959s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-433102 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (52.29s)

TestAddons/parallel/Headlamp (17.6s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-433102 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-gg8r9" [bb809283-0294-420c-8d5a-7cd999443c2b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-gg8r9" [bb809283-0294-420c-8d5a-7cd999443c2b] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.00410544s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-433102 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-433102 addons disable headlamp --alsologtostderr -v=1: (5.688419936s)
--- PASS: TestAddons/parallel/Headlamp (17.60s)
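Once the headlamp pod is Running, the dashboard is reached like any other addon service; a hedged sketch, assuming the addon exposes a Service named headlamp in the headlamp namespace (the namespace matches the pod selector above, the service name is an assumption):

out/minikube-linux-amd64 -p addons-433102 service headlamp -n headlamp --url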

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.53s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-rlxs5" [4b7bc91a-cfa5-42aa-a2b3-d6aa6f0a8e93] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004587562s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-433102
--- PASS: TestAddons/parallel/CloudSpanner (5.53s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (56.05s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-433102 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-433102 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433102 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433102 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433102 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433102 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433102 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433102 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b235038a-a2a9-4aaa-a426-f726ac55fe9a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b235038a-a2a9-4aaa-a426-f726ac55fe9a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b235038a-a2a9-4aaa-a426-f726ac55fe9a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.003361646s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-433102 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-433102 ssh "cat /opt/local-path-provisioner/pvc-b5b14fe5-d708-427a-a913-c11d781bebaf_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-433102 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-433102 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-433102 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-433102 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.260185588s)
--- PASS: TestAddons/parallel/LocalPath (56.05s)
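The storage-provisioner-rancher addon is the local-path provisioner, which is why the PVC polls above stay Pending until the consuming pod is scheduled (WaitForFirstConsumer binding) and why the data lands under /opt/local-path-provisioner on the node. A minimal sketch of a claim that would use it, assuming the provisioner's default class name local-path and an arbitrary 64Mi request:

kubectl --context addons-433102 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path   # assumption: default class installed by the provisioner
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 64Mi
EOF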

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.5s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-w9bhg" [56c0414f-7d09-4189-9d58-7fc65a0d5eb8] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005212581s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-433102
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.50s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (10.87s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-jjkk9" [79293be3-4521-4ba8-a968-5bddbcee2e37] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003928043s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-433102 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-433102 addons disable yakd --alsologtostderr -v=1: (5.862743217s)
--- PASS: TestAddons/parallel/Yakd (10.87s)

                                                
                                    
x
+
TestCertOptions (90.38s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-233394 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-233394 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m28.904838074s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-233394 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-233394 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-233394 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-233394" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-233394
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-233394: (1.029887539s)
--- PASS: TestCertOptions (90.38s)
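The openssl and config-view steps above are what verify the custom SANs and the non-default API server port; run by hand, the checks look roughly like this (standard openssl/kubectl flags, profile name as in the test):

# SANs should include 192.168.15.15 and www.google.com
out/minikube-linux-amd64 -p cert-options-233394 ssh "sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"

# the kubeconfig should point at port 8555 rather than the default
kubectl --context cert-options-233394 config view -o jsonpath='{.clusters[?(@.name=="cert-options-233394")].cluster.server}'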

                                                
                                    
x
+
TestCertExpiration (310.39s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-548627 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-548627 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m17.309791034s)
E0729 18:11:52.902967   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-548627 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-548627 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (52.269069758s)
helpers_test.go:175: Cleaning up "cert-expiration-548627" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-548627
--- PASS: TestCertExpiration (310.39s)
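The test first issues certificates valid for only 3 minutes, then restarts with --cert-expiration=8760h once that window has presumably lapsed, forcing regeneration. The remaining lifetime can be inspected directly (cert path as used elsewhere in this report):

out/minikube-linux-amd64 -p cert-expiration-548627 ssh "sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"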

                                                
                                    
x
+
TestForceSystemdFlag (52.9s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-019533 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-019533 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (51.711051515s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-019533 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-019533" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-019533
--- PASS: TestForceSystemdFlag (52.90s)
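--force-systemd switches the node's cgroup manager to systemd, and the drop-in read by the test above is where that setting lands for CRI-O. A quick manual check (cgroup_manager is CRI-O's config key; with this flag it should read "systemd"):

out/minikube-linux-amd64 -p force-systemd-flag-019533 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"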

                                                
                                    
x
+
TestForceSystemdEnv (47.32s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-900095 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-900095 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (46.32393546s)
helpers_test.go:175: Cleaning up "force-systemd-env-900095" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-900095
--- PASS: TestForceSystemdEnv (47.32s)
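The env-driven variant exercises the same behaviour through the MINIKUBE_FORCE_SYSTEMD variable (visible in the environment dumps elsewhere in this report); an equivalent manual invocation would be roughly:

MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-amd64 start -p force-systemd-env-900095 --memory=2048 --driver=kvm2 --container-runtime=crio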

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.23s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.23s)

                                                
                                    
x
+
TestErrorSpam/setup (42.92s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-907056 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-907056 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-907056 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-907056 --driver=kvm2  --container-runtime=crio: (42.920301675s)
--- PASS: TestErrorSpam/setup (42.92s)

                                                
                                    
x
+
TestErrorSpam/start (0.32s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907056 --log_dir /tmp/nospam-907056 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907056 --log_dir /tmp/nospam-907056 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907056 --log_dir /tmp/nospam-907056 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

                                                
                                    
x
+
TestErrorSpam/status (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907056 --log_dir /tmp/nospam-907056 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907056 --log_dir /tmp/nospam-907056 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907056 --log_dir /tmp/nospam-907056 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
x
+
TestErrorSpam/pause (1.55s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907056 --log_dir /tmp/nospam-907056 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907056 --log_dir /tmp/nospam-907056 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907056 --log_dir /tmp/nospam-907056 pause
--- PASS: TestErrorSpam/pause (1.55s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907056 --log_dir /tmp/nospam-907056 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907056 --log_dir /tmp/nospam-907056 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907056 --log_dir /tmp/nospam-907056 unpause
--- PASS: TestErrorSpam/unpause (1.53s)

                                                
                                    
x
+
TestErrorSpam/stop (4.69s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907056 --log_dir /tmp/nospam-907056 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-907056 --log_dir /tmp/nospam-907056 stop: (1.601150719s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907056 --log_dir /tmp/nospam-907056 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-907056 --log_dir /tmp/nospam-907056 stop: (1.110994119s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907056 --log_dir /tmp/nospam-907056 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-907056 --log_dir /tmp/nospam-907056 stop: (1.980079245s)
--- PASS: TestErrorSpam/stop (4.69s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19345-11206/.minikube/files/etc/test/nested/copy/18393/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (96.9s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-419822 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0729 17:08:29.676797   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
E0729 17:08:29.682551   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
E0729 17:08:29.692784   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
E0729 17:08:29.713024   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
E0729 17:08:29.753280   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
E0729 17:08:29.833612   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
E0729 17:08:29.994063   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
E0729 17:08:30.314631   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
E0729 17:08:30.955563   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
E0729 17:08:32.236058   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
E0729 17:08:34.796290   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
E0729 17:08:39.917082   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
E0729 17:08:50.158095   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
E0729 17:09:10.638514   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
E0729 17:09:51.598685   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-419822 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m36.899117988s)
--- PASS: TestFunctional/serial/StartWithProxy (96.90s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (43.2s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-419822 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-419822 --alsologtostderr -v=8: (43.200810601s)
functional_test.go:659: soft start took 43.201472045s for "functional-419822" cluster.
--- PASS: TestFunctional/serial/SoftStart (43.20s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-419822 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-419822 cache add registry.k8s.io/pause:3.3: (1.07434227s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-419822 cache add registry.k8s.io/pause:latest: (1.013103235s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-419822 /tmp/TestFunctionalserialCacheCmdcacheadd_local723820861/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 cache add minikube-local-cache-test:functional-419822
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 cache delete minikube-local-cache-test:functional-419822
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-419822
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.57s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419822 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (209.535974ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.57s)
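Condensed, the reload flow above is: remove the image from the node's runtime, confirm that crictl inspecti now fails, then let cache reload push every cached image back in:

out/minikube-linux-amd64 -p functional-419822 ssh sudo crictl rmi registry.k8s.io/pause:latest
out/minikube-linux-amd64 -p functional-419822 cache reload
out/minikube-linux-amd64 -p functional-419822 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again after the reload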

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 kubectl -- --context functional-419822 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-419822 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (51.13s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-419822 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0729 17:11:13.519236   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-419822 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (51.128584058s)
functional_test.go:757: restart took 51.128700822s for "functional-419822" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (51.13s)
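--extra-config=apiserver.enable-admission-plugins=... is passed straight through to the kube-apiserver command line; whether it took effect can be confirmed against the running static pod (component=kube-apiserver is the standard kubeadm label; this check is a sketch, not part of the test):

kubectl --context functional-419822 -n kube-system get pods -l component=kube-apiserver -o yaml | grep enable-admission-plugins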

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-419822 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-419822 logs: (1.367328317s)
--- PASS: TestFunctional/serial/LogsCmd (1.37s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 logs --file /tmp/TestFunctionalserialLogsFileCmd837057618/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-419822 logs --file /tmp/TestFunctionalserialLogsFileCmd837057618/001/logs.txt: (1.482499606s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.35s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-419822 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-419822
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-419822: exit status 115 (263.749082ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.26:32548 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-419822 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.35s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419822 config get cpus: exit status 14 (63.197844ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419822 config get cpus: exit status 14 (42.578978ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (14.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-419822 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-419822 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 28280: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.41s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-419822 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-419822 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (122.945556ms)

                                                
                                                
-- stdout --
	* [functional-419822] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19345
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:12:10.885745   27880 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:12:10.885996   27880 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:12:10.886005   27880 out.go:304] Setting ErrFile to fd 2...
	I0729 17:12:10.886010   27880 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:12:10.886185   27880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 17:12:10.886718   27880 out.go:298] Setting JSON to false
	I0729 17:12:10.887571   27880 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3283,"bootTime":1722269848,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 17:12:10.887619   27880 start.go:139] virtualization: kvm guest
	I0729 17:12:10.889800   27880 out.go:177] * [functional-419822] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 17:12:10.891105   27880 notify.go:220] Checking for updates...
	I0729 17:12:10.891124   27880 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 17:12:10.892370   27880 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:12:10.893823   27880 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 17:12:10.895080   27880 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 17:12:10.896262   27880 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 17:12:10.897325   27880 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:12:10.898694   27880 config.go:182] Loaded profile config "functional-419822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:12:10.899103   27880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:12:10.899170   27880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:12:10.913742   27880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44401
	I0729 17:12:10.914191   27880 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:12:10.914713   27880 main.go:141] libmachine: Using API Version  1
	I0729 17:12:10.914731   27880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:12:10.915063   27880 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:12:10.915249   27880 main.go:141] libmachine: (functional-419822) Calling .DriverName
	I0729 17:12:10.915507   27880 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:12:10.915802   27880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:12:10.915842   27880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:12:10.929622   27880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43307
	I0729 17:12:10.930022   27880 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:12:10.930504   27880 main.go:141] libmachine: Using API Version  1
	I0729 17:12:10.930524   27880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:12:10.930806   27880 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:12:10.930984   27880 main.go:141] libmachine: (functional-419822) Calling .DriverName
	I0729 17:12:10.961899   27880 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 17:12:10.963096   27880 start.go:297] selected driver: kvm2
	I0729 17:12:10.963111   27880 start.go:901] validating driver "kvm2" against &{Name:functional-419822 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-419822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:12:10.963232   27880 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:12:10.965431   27880 out.go:177] 
	W0729 17:12:10.966490   27880 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0729 17:12:10.967629   27880 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-419822 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-419822 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-419822 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (147.599422ms)

                                                
                                                
-- stdout --
	* [functional-419822] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19345
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:11:53.060760   26396 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:11:53.060868   26396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:11:53.060876   26396 out.go:304] Setting ErrFile to fd 2...
	I0729 17:11:53.060882   26396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:11:53.061123   26396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 17:11:53.061634   26396 out.go:298] Setting JSON to false
	I0729 17:11:53.062561   26396 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3265,"bootTime":1722269848,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 17:11:53.062615   26396 start.go:139] virtualization: kvm guest
	I0729 17:11:53.064723   26396 out.go:177] * [functional-419822] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0729 17:11:53.066074   26396 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 17:11:53.066129   26396 notify.go:220] Checking for updates...
	I0729 17:11:53.068444   26396 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:11:53.069629   26396 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 17:11:53.070755   26396 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 17:11:53.071988   26396 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 17:11:53.073093   26396 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:11:53.074642   26396 config.go:182] Loaded profile config "functional-419822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:11:53.075079   26396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:11:53.075124   26396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:11:53.095325   26396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38353
	I0729 17:11:53.095735   26396 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:11:53.096513   26396 main.go:141] libmachine: Using API Version  1
	I0729 17:11:53.096529   26396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:11:53.096860   26396 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:11:53.097543   26396 main.go:141] libmachine: (functional-419822) Calling .DriverName
	I0729 17:11:53.097959   26396 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:11:53.098263   26396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:11:53.098415   26396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:11:53.115742   26396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42945
	I0729 17:11:53.116133   26396 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:11:53.116632   26396 main.go:141] libmachine: Using API Version  1
	I0729 17:11:53.116658   26396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:11:53.117022   26396 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:11:53.117225   26396 main.go:141] libmachine: (functional-419822) Calling .DriverName
	I0729 17:11:53.153687   26396 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0729 17:11:53.154830   26396 start.go:297] selected driver: kvm2
	I0729 17:11:53.154843   26396 start.go:901] validating driver "kvm2" against &{Name:functional-419822 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-419822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:11:53.154980   26396 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:11:53.157029   26396 out.go:177] 
	W0729 17:11:53.158053   26396 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0729 17:11:53.159206   26396 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.14s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (7.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-419822 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-419822 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-pwfln" [fc05005e-53b2-4b7e-892d-a3725ec31985] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-pwfln" [fc05005e-53b2-4b7e-892d-a3725ec31985] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.332562344s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.26:32703
functional_test.go:1671: http://192.168.39.26:32703: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-pwfln

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.26:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.26:32703
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.84s)
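The test above exposes the deployment as a NodePort service, takes the URL printed by `minikube service hello-node-connect --url`, and fetches it. A minimal sketch of polling such an endpoint until the pod answers, assuming the URL from the log; this is an illustration, not the helper the test itself uses:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForEndpoint polls a NodePort URL (such as the one printed by
// `minikube service hello-node-connect --url`) until it returns HTTP 200
// or the deadline expires.
func waitForEndpoint(url string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	client := &http.Client{Timeout: 5 * time.Second}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return string(body), nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return "", fmt.Errorf("endpoint %s not ready within %s", url, timeout)
}

func main() {
	// URL taken from the log above; it only resolves inside that test environment.
	body, err := waitForEndpoint("http://192.168.39.26:32703", 2*time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println(body)
}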

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.35s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (29.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [c51197f4-b45c-4aed-8e3b-88df5004f438] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004002756s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-419822 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-419822 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-419822 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-419822 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-419822 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [357a5ee4-d1e1-4b02-ae5b-6aafd12ef387] Pending
helpers_test.go:344: "sp-pod" [357a5ee4-d1e1-4b02-ae5b-6aafd12ef387] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [357a5ee4-d1e1-4b02-ae5b-6aafd12ef387] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004680817s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-419822 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-419822 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-419822 delete -f testdata/storage-provisioner/pod.yaml: (3.164040427s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-419822 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e51bdc7b-fc0b-49b5-bd78-b1af7db38271] Pending
helpers_test.go:344: "sp-pod" [e51bdc7b-fc0b-49b5-bd78-b1af7db38271] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e51bdc7b-fc0b-49b5-bd78-b1af7db38271] Running
2024/07/29 17:12:31 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.006934745s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-419822 exec sp-pod -- ls /tmp/mount
E0729 17:13:29.677043   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
E0729 17:13:57.359550   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.40s)
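The sequence above writes a marker file into the PVC-backed mount, deletes the pod, recreates it from the same manifest, and checks that the file survived. A minimal sketch of that flow driven with kubectl via os/exec, assuming the testdata manifests and pod name from the log; `kubectl wait` is used here in place of the test's own polling helper:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and returns its combined output, failing loudly on error.
func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s %v: %v\n%s", name, args, err, out))
	}
	return string(out)
}

func main() {
	ctx := "functional-419822" // kube context assumed from the log above

	// Write a marker file into the PVC-backed mount inside the pod.
	run("kubectl", "--context", ctx, "exec", "sp-pod", "--", "touch", "/tmp/mount/foo")

	// Recreate the pod; the PVC (and its data) should survive the pod's deletion.
	run("kubectl", "--context", ctx, "delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("kubectl", "--context", ctx, "apply", "-f", "testdata/storage-provisioner/pod.yaml")
	run("kubectl", "--context", ctx, "wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=180s")

	// The marker file written before the restart should still be listed.
	fmt.Print(run("kubectl", "--context", ctx, "exec", "sp-pod", "--", "ls", "/tmp/mount"))
}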

                                                
                                    
TestFunctional/parallel/SSHCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.46s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh -n functional-419822 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 cp functional-419822:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3835767513/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh -n functional-419822 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh -n functional-419822 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.45s)

                                                
                                    
TestFunctional/parallel/MySQL (21.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-419822 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-bgrvt" [0a69b7e9-6fce-4237-9c39-5a3eb6195710] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-bgrvt" [0a69b7e9-6fce-4237-9c39-5a3eb6195710] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.005208479s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-419822 exec mysql-64454c8b5c-bgrvt -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-419822 exec mysql-64454c8b5c-bgrvt -- mysql -ppassword -e "show databases;": exit status 1 (385.201149ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-419822 exec mysql-64454c8b5c-bgrvt -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.22s)
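The first exec above fails with ERROR 2002 because mysqld is not yet accepting connections on its socket when the pod reports Running; the test simply retries the query. A minimal sketch of such a retry loop, reusing the pod name and kubectl exec invocation from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// queryWithRetry retries `mysql -e "show databases;"` inside the pod until the
// server accepts connections (the first attempt in the log above failed with
// ERROR 2002 while mysqld was still starting up).
func queryWithRetry(ctx, pod string, attempts int) (string, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "--context", ctx, "exec", pod, "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		lastErr = fmt.Errorf("attempt %d: %v\n%s", i+1, err, out)
		time.Sleep(5 * time.Second)
	}
	return "", lastErr
}

func main() {
	out, err := queryWithRetry("functional-419822", "mysql-64454c8b5c-bgrvt", 10)
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}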

                                                
                                    
TestFunctional/parallel/FileSync (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/18393/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh "sudo cat /etc/test/nested/copy/18393/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

                                                
                                    
TestFunctional/parallel/CertSync (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/18393.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh "sudo cat /etc/ssl/certs/18393.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/18393.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh "sudo cat /usr/share/ca-certificates/18393.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/183932.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh "sudo cat /etc/ssl/certs/183932.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/183932.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh "sudo cat /usr/share/ca-certificates/183932.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.40s)
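The checks above confirm that the host's certificate files were synced into the VM both under their original names and under OpenSSL hash-named entries (51391683.0, 3ec20f2e.0). A minimal sketch of pulling one of those files back out via `minikube ssh` and verifying it parses as an X.509 certificate, assuming the binary path and profile name from the log; the hash-name check itself is not reproduced here:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os/exec"
)

func main() {
	// Read one of the synced files back out of the VM; path and profile
	// name are taken from the log above.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-419822",
		"ssh", "sudo cat /etc/ssl/certs/18393.pem").Output()
	if err != nil {
		panic(err)
	}

	// Verify the synced content is a parseable X.509 certificate.
	block, _ := pem.Decode(out)
	if block == nil || block.Type != "CERTIFICATE" {
		panic("no PEM certificate block found in synced file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Printf("synced cert subject: %s, notAfter: %s\n", cert.Subject, cert.NotAfter)
}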

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-419822 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419822 ssh "sudo systemctl is-active docker": exit status 1 (269.647216ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419822 ssh "sudo systemctl is-active containerd": exit status 1 (287.389614ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
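`systemctl is-active` prints "inactive" and exits non-zero (typically status 3) for stopped units, so the non-zero exits above together with "inactive" on stdout are the expected outcome on a crio profile. A minimal sketch that distinguishes an inactive runtime from a real failure, assuming the binary path and profile name from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeActive asks systemd inside the VM whether a unit is running.
// `systemctl is-active` exits 0 for "active" and non-zero for "inactive",
// so a non-zero exit here is not necessarily an error.
func runtimeActive(profile, unit string) (bool, error) {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "sudo systemctl is-active "+unit)
	out, err := cmd.Output() // a non-zero exit is expected for inactive units
	state := strings.TrimSpace(string(out))
	switch state {
	case "active":
		return true, nil
	case "inactive", "failed":
		return false, nil // expected for runtimes that are disabled on this profile
	}
	return false, fmt.Errorf("unexpected state %q: %v", state, err)
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		active, err := runtimeActive("functional-419822", unit)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s active: %v\n", unit, active)
	}
}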

                                                
                                    
TestFunctional/parallel/License (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.19s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-419822 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-419822 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-zqd4h" [60b1a9d7-750d-433d-9e41-0735b51a08d0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-zqd4h" [60b1a9d7-750d-433d-9e41-0735b51a08d0] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003868396s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "324.074076ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "46.288329ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "238.637287ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "43.79313ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

                                                
                                    
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-419822 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-419822
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-419822
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-419822 image ls --format short --alsologtostderr:
I0729 17:12:19.915121   28234 out.go:291] Setting OutFile to fd 1 ...
I0729 17:12:19.915360   28234 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 17:12:19.915369   28234 out.go:304] Setting ErrFile to fd 2...
I0729 17:12:19.915373   28234 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 17:12:19.915591   28234 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
I0729 17:12:19.916171   28234 config.go:182] Loaded profile config "functional-419822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 17:12:19.916281   28234 config.go:182] Loaded profile config "functional-419822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 17:12:19.916620   28234 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 17:12:19.916662   28234 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 17:12:19.931545   28234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38617
I0729 17:12:19.932057   28234 main.go:141] libmachine: () Calling .GetVersion
I0729 17:12:19.932611   28234 main.go:141] libmachine: Using API Version  1
I0729 17:12:19.932636   28234 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 17:12:19.933030   28234 main.go:141] libmachine: () Calling .GetMachineName
I0729 17:12:19.933198   28234 main.go:141] libmachine: (functional-419822) Calling .GetState
I0729 17:12:19.934824   28234 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 17:12:19.934857   28234 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 17:12:19.949646   28234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40497
I0729 17:12:19.950078   28234 main.go:141] libmachine: () Calling .GetVersion
I0729 17:12:19.950586   28234 main.go:141] libmachine: Using API Version  1
I0729 17:12:19.950606   28234 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 17:12:19.950909   28234 main.go:141] libmachine: () Calling .GetMachineName
I0729 17:12:19.951067   28234 main.go:141] libmachine: (functional-419822) Calling .DriverName
I0729 17:12:19.951255   28234 ssh_runner.go:195] Run: systemctl --version
I0729 17:12:19.951279   28234 main.go:141] libmachine: (functional-419822) Calling .GetSSHHostname
I0729 17:12:19.953854   28234 main.go:141] libmachine: (functional-419822) DBG | domain functional-419822 has defined MAC address 52:54:00:af:4e:d9 in network mk-functional-419822
I0729 17:12:19.954240   28234 main.go:141] libmachine: (functional-419822) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:4e:d9", ip: ""} in network mk-functional-419822: {Iface:virbr1 ExpiryTime:2024-07-29 18:08:42 +0000 UTC Type:0 Mac:52:54:00:af:4e:d9 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:functional-419822 Clientid:01:52:54:00:af:4e:d9}
I0729 17:12:19.954270   28234 main.go:141] libmachine: (functional-419822) DBG | domain functional-419822 has defined IP address 192.168.39.26 and MAC address 52:54:00:af:4e:d9 in network mk-functional-419822
I0729 17:12:19.954446   28234 main.go:141] libmachine: (functional-419822) Calling .GetSSHPort
I0729 17:12:19.954604   28234 main.go:141] libmachine: (functional-419822) Calling .GetSSHKeyPath
I0729 17:12:19.954723   28234 main.go:141] libmachine: (functional-419822) Calling .GetSSHUsername
I0729 17:12:19.954828   28234 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/functional-419822/id_rsa Username:docker}
I0729 17:12:20.057232   28234 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 17:12:20.121966   28234 main.go:141] libmachine: Making call to close driver server
I0729 17:12:20.121981   28234 main.go:141] libmachine: (functional-419822) Calling .Close
I0729 17:12:20.122286   28234 main.go:141] libmachine: Successfully made call to close driver server
I0729 17:12:20.122296   28234 main.go:141] libmachine: (functional-419822) DBG | Closing plugin on server side
I0729 17:12:20.122305   28234 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 17:12:20.122313   28234 main.go:141] libmachine: Making call to close driver server
I0729 17:12:20.122321   28234 main.go:141] libmachine: (functional-419822) Calling .Close
I0729 17:12:20.122535   28234 main.go:141] libmachine: Successfully made call to close driver server
I0729 17:12:20.122550   28234 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 17:12:20.122579   28234 main.go:141] libmachine: (functional-419822) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
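As the stderr above shows, `image ls` shells into the node and runs `sudo crictl images --output json`, then formats the result. A minimal sketch of decoding that JSON into the short tag listing; the struct field names are assumptions inferred from the `image ls --format json` output later in this report, and running the command locally like this assumes crictl is installed on the host:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"sort"
)

// imageList mirrors the assumed shape of `crictl images --output json`;
// field names are inferred from the JSON listing shown later in this report.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	var tags []string
	for _, img := range list.Images {
		tags = append(tags, img.RepoTags...)
	}
	// The short listing above appears in reverse-sorted order.
	sort.Sort(sort.Reverse(sort.StringSlice(tags)))
	for _, t := range tags {
		fmt.Println(t)
	}
}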

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-419822 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| localhost/my-image                      | functional-419822  | 1229b6e9cd7cb | 1.47MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kicbase/echo-server           | functional-419822  | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| localhost/minikube-local-cache-test     | functional-419822  | 500b8eb2ee1ce | 3.33kB |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | a72860cb95fd5 | 192MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-419822 image ls --format table --alsologtostderr:
I0729 17:12:23.408273   28390 out.go:291] Setting OutFile to fd 1 ...
I0729 17:12:23.408377   28390 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 17:12:23.408382   28390 out.go:304] Setting ErrFile to fd 2...
I0729 17:12:23.408387   28390 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 17:12:23.408605   28390 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
I0729 17:12:23.409245   28390 config.go:182] Loaded profile config "functional-419822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 17:12:23.409356   28390 config.go:182] Loaded profile config "functional-419822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 17:12:23.410129   28390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 17:12:23.410216   28390 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 17:12:23.428366   28390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44485
I0729 17:12:23.428733   28390 main.go:141] libmachine: () Calling .GetVersion
I0729 17:12:23.429342   28390 main.go:141] libmachine: Using API Version  1
I0729 17:12:23.429380   28390 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 17:12:23.429699   28390 main.go:141] libmachine: () Calling .GetMachineName
I0729 17:12:23.429883   28390 main.go:141] libmachine: (functional-419822) Calling .GetState
I0729 17:12:23.431528   28390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 17:12:23.431570   28390 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 17:12:23.446952   28390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36503
I0729 17:12:23.447270   28390 main.go:141] libmachine: () Calling .GetVersion
I0729 17:12:23.447674   28390 main.go:141] libmachine: Using API Version  1
I0729 17:12:23.447704   28390 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 17:12:23.448037   28390 main.go:141] libmachine: () Calling .GetMachineName
I0729 17:12:23.448271   28390 main.go:141] libmachine: (functional-419822) Calling .DriverName
I0729 17:12:23.448456   28390 ssh_runner.go:195] Run: systemctl --version
I0729 17:12:23.448490   28390 main.go:141] libmachine: (functional-419822) Calling .GetSSHHostname
I0729 17:12:23.450977   28390 main.go:141] libmachine: (functional-419822) DBG | domain functional-419822 has defined MAC address 52:54:00:af:4e:d9 in network mk-functional-419822
I0729 17:12:23.451360   28390 main.go:141] libmachine: (functional-419822) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:4e:d9", ip: ""} in network mk-functional-419822: {Iface:virbr1 ExpiryTime:2024-07-29 18:08:42 +0000 UTC Type:0 Mac:52:54:00:af:4e:d9 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:functional-419822 Clientid:01:52:54:00:af:4e:d9}
I0729 17:12:23.451387   28390 main.go:141] libmachine: (functional-419822) DBG | domain functional-419822 has defined IP address 192.168.39.26 and MAC address 52:54:00:af:4e:d9 in network mk-functional-419822
I0729 17:12:23.451547   28390 main.go:141] libmachine: (functional-419822) Calling .GetSSHPort
I0729 17:12:23.451710   28390 main.go:141] libmachine: (functional-419822) Calling .GetSSHKeyPath
I0729 17:12:23.451869   28390 main.go:141] libmachine: (functional-419822) Calling .GetSSHUsername
I0729 17:12:23.451994   28390 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/functional-419822/id_rsa Username:docker}
I0729 17:12:23.537023   28390 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 17:12:23.585299   28390 main.go:141] libmachine: Making call to close driver server
I0729 17:12:23.585318   28390 main.go:141] libmachine: (functional-419822) Calling .Close
I0729 17:12:23.585628   28390 main.go:141] libmachine: Successfully made call to close driver server
I0729 17:12:23.585644   28390 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 17:12:23.585660   28390 main.go:141] libmachine: Making call to close driver server
I0729 17:12:23.585668   28390 main.go:141] libmachine: (functional-419822) Calling .Close
I0729 17:12:23.585683   28390 main.go:141] libmachine: (functional-419822) DBG | Closing plugin on server side
I0729 17:12:23.585951   28390 main.go:141] libmachine: Successfully made call to close driver server
I0729 17:12:23.585980   28390 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-419822 image ls --format json --alsologtostderr:
[{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"500b8eb2ee1ce5ee137a3f3e8240fd79f265118c01f2ce0412b506f49028b485","repoDi
gests":["localhost/minikube-local-cache-test@sha256:ac7c210a755756f8913959dfd2d81a317f3809d9813a317cfabf4e42160a868f"],"repoTags":["localhost/minikube-local-cache-test:functional-419822"],"size":"3330"},{"id":"1229b6e9cd7cbe66fb3848e7271339e57e446cec85a64f919f660eb0a9b0469a","repoDigests":["localhost/my-image@sha256:bbea20d44061b2834816a7af91c8ac1eeac38205c9a161f5b2bdbf9138f46532"],"repoTags":["localhost/my-image:functional-419822"],"size":"1468599"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":
["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysq
l@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":["docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c","docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af
740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-419822"],"size":"4943877"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr
.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s
.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"b4d40e32280d5cfc400509a04b495d9d86e2214e742ab95bbd053d38fa187a25","repoDigests":["docker.io/library/bd660f678062af9a4405835cb0f25f39c321b31765bc08a8d0eff4b4311ee236-tmp@sha256:0e0dcc729c6629f7f7976369fce23a8e8af3d0bb003a3d49ba940f1dadf30a18"],"repoTags":[],"size":"1466018"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-419822 image ls --format json --alsologtostderr:
I0729 17:12:23.164233   28367 out.go:291] Setting OutFile to fd 1 ...
I0729 17:12:23.164332   28367 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 17:12:23.164340   28367 out.go:304] Setting ErrFile to fd 2...
I0729 17:12:23.164348   28367 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 17:12:23.164533   28367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
I0729 17:12:23.165097   28367 config.go:182] Loaded profile config "functional-419822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 17:12:23.165225   28367 config.go:182] Loaded profile config "functional-419822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 17:12:23.165613   28367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 17:12:23.165664   28367 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 17:12:23.180328   28367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41627
I0729 17:12:23.180721   28367 main.go:141] libmachine: () Calling .GetVersion
I0729 17:12:23.181228   28367 main.go:141] libmachine: Using API Version  1
I0729 17:12:23.181249   28367 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 17:12:23.181589   28367 main.go:141] libmachine: () Calling .GetMachineName
I0729 17:12:23.181777   28367 main.go:141] libmachine: (functional-419822) Calling .GetState
I0729 17:12:23.183496   28367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 17:12:23.183529   28367 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 17:12:23.198166   28367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42987
I0729 17:12:23.198653   28367 main.go:141] libmachine: () Calling .GetVersion
I0729 17:12:23.199110   28367 main.go:141] libmachine: Using API Version  1
I0729 17:12:23.199128   28367 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 17:12:23.199498   28367 main.go:141] libmachine: () Calling .GetMachineName
I0729 17:12:23.199676   28367 main.go:141] libmachine: (functional-419822) Calling .DriverName
I0729 17:12:23.199919   28367 ssh_runner.go:195] Run: systemctl --version
I0729 17:12:23.199956   28367 main.go:141] libmachine: (functional-419822) Calling .GetSSHHostname
I0729 17:12:23.202470   28367 main.go:141] libmachine: (functional-419822) DBG | domain functional-419822 has defined MAC address 52:54:00:af:4e:d9 in network mk-functional-419822
I0729 17:12:23.202850   28367 main.go:141] libmachine: (functional-419822) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:4e:d9", ip: ""} in network mk-functional-419822: {Iface:virbr1 ExpiryTime:2024-07-29 18:08:42 +0000 UTC Type:0 Mac:52:54:00:af:4e:d9 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:functional-419822 Clientid:01:52:54:00:af:4e:d9}
I0729 17:12:23.202878   28367 main.go:141] libmachine: (functional-419822) DBG | domain functional-419822 has defined IP address 192.168.39.26 and MAC address 52:54:00:af:4e:d9 in network mk-functional-419822
I0729 17:12:23.202992   28367 main.go:141] libmachine: (functional-419822) Calling .GetSSHPort
I0729 17:12:23.203180   28367 main.go:141] libmachine: (functional-419822) Calling .GetSSHKeyPath
I0729 17:12:23.203312   28367 main.go:141] libmachine: (functional-419822) Calling .GetSSHUsername
I0729 17:12:23.203454   28367 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/functional-419822/id_rsa Username:docker}
I0729 17:12:23.309843   28367 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 17:12:23.356430   28367 main.go:141] libmachine: Making call to close driver server
I0729 17:12:23.356442   28367 main.go:141] libmachine: (functional-419822) Calling .Close
I0729 17:12:23.356728   28367 main.go:141] libmachine: Successfully made call to close driver server
I0729 17:12:23.356747   28367 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 17:12:23.356766   28367 main.go:141] libmachine: Making call to close driver server
I0729 17:12:23.356774   28367 main.go:141] libmachine: (functional-419822) Calling .Close
I0729 17:12:23.357014   28367 main.go:141] libmachine: Successfully made call to close driver server
I0729 17:12:23.357029   28367 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-419822 image ls --format yaml --alsologtostderr:
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-419822
size: "4943877"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests:
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
- docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 500b8eb2ee1ce5ee137a3f3e8240fd79f265118c01f2ce0412b506f49028b485
repoDigests:
- localhost/minikube-local-cache-test@sha256:ac7c210a755756f8913959dfd2d81a317f3809d9813a317cfabf4e42160a868f
repoTags:
- localhost/minikube-local-cache-test:functional-419822
size: "3330"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-419822 image ls --format yaml --alsologtostderr:
I0729 17:12:20.167242   28257 out.go:291] Setting OutFile to fd 1 ...
I0729 17:12:20.167345   28257 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 17:12:20.167354   28257 out.go:304] Setting ErrFile to fd 2...
I0729 17:12:20.167358   28257 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 17:12:20.167546   28257 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
I0729 17:12:20.168072   28257 config.go:182] Loaded profile config "functional-419822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 17:12:20.168170   28257 config.go:182] Loaded profile config "functional-419822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 17:12:20.168584   28257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 17:12:20.168627   28257 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 17:12:20.183619   28257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34965
I0729 17:12:20.184073   28257 main.go:141] libmachine: () Calling .GetVersion
I0729 17:12:20.184635   28257 main.go:141] libmachine: Using API Version  1
I0729 17:12:20.184664   28257 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 17:12:20.184964   28257 main.go:141] libmachine: () Calling .GetMachineName
I0729 17:12:20.185139   28257 main.go:141] libmachine: (functional-419822) Calling .GetState
I0729 17:12:20.187253   28257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 17:12:20.187292   28257 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 17:12:20.203247   28257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34129
I0729 17:12:20.203605   28257 main.go:141] libmachine: () Calling .GetVersion
I0729 17:12:20.204142   28257 main.go:141] libmachine: Using API Version  1
I0729 17:12:20.204169   28257 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 17:12:20.204516   28257 main.go:141] libmachine: () Calling .GetMachineName
I0729 17:12:20.204709   28257 main.go:141] libmachine: (functional-419822) Calling .DriverName
I0729 17:12:20.204908   28257 ssh_runner.go:195] Run: systemctl --version
I0729 17:12:20.204937   28257 main.go:141] libmachine: (functional-419822) Calling .GetSSHHostname
I0729 17:12:20.207931   28257 main.go:141] libmachine: (functional-419822) DBG | domain functional-419822 has defined MAC address 52:54:00:af:4e:d9 in network mk-functional-419822
I0729 17:12:20.208330   28257 main.go:141] libmachine: (functional-419822) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:4e:d9", ip: ""} in network mk-functional-419822: {Iface:virbr1 ExpiryTime:2024-07-29 18:08:42 +0000 UTC Type:0 Mac:52:54:00:af:4e:d9 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:functional-419822 Clientid:01:52:54:00:af:4e:d9}
I0729 17:12:20.208368   28257 main.go:141] libmachine: (functional-419822) DBG | domain functional-419822 has defined IP address 192.168.39.26 and MAC address 52:54:00:af:4e:d9 in network mk-functional-419822
I0729 17:12:20.208493   28257 main.go:141] libmachine: (functional-419822) Calling .GetSSHPort
I0729 17:12:20.208644   28257 main.go:141] libmachine: (functional-419822) Calling .GetSSHKeyPath
I0729 17:12:20.208831   28257 main.go:141] libmachine: (functional-419822) Calling .GetSSHUsername
I0729 17:12:20.208967   28257 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/functional-419822/id_rsa Username:docker}
I0729 17:12:20.294052   28257 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 17:12:20.338723   28257 main.go:141] libmachine: Making call to close driver server
I0729 17:12:20.338738   28257 main.go:141] libmachine: (functional-419822) Calling .Close
I0729 17:12:20.338985   28257 main.go:141] libmachine: Successfully made call to close driver server
I0729 17:12:20.339001   28257 main.go:141] libmachine: (functional-419822) DBG | Closing plugin on server side
I0729 17:12:20.339002   28257 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 17:12:20.339026   28257 main.go:141] libmachine: Making call to close driver server
I0729 17:12:20.339034   28257 main.go:141] libmachine: (functional-419822) Calling .Close
I0729 17:12:20.339260   28257 main.go:141] libmachine: Successfully made call to close driver server
I0729 17:12:20.339276   28257 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 17:12:20.339293   28257 main.go:141] libmachine: (functional-419822) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
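For reference, the YAML listing above is produced by the image ls subcommand, which SSHes into the node and reads the CRI image store via crictl (both invocations appear in the log). A minimal shell sketch of reproducing the two views by hand, assuming the same profile name:

out/minikube-linux-amd64 -p functional-419822 image ls --format yaml                    # formatted listing shown above
out/minikube-linux-amd64 -p functional-419822 ssh "sudo crictl images --output json"    # raw CRI data the listing is built from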

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419822 ssh pgrep buildkitd: exit status 1 (184.425922ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 image build -t localhost/my-image:functional-419822 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-419822 image build -t localhost/my-image:functional-419822 testdata/build --alsologtostderr: (2.042187119s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-419822 image build -t localhost/my-image:functional-419822 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b4d40e32280
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-419822
--> 1229b6e9cd7
Successfully tagged localhost/my-image:functional-419822
1229b6e9cd7cbe66fb3848e7271339e57e446cec85a64f919f660eb0a9b0469a
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-419822 image build -t localhost/my-image:functional-419822 testdata/build --alsologtostderr:
I0729 17:12:20.566459   28320 out.go:291] Setting OutFile to fd 1 ...
I0729 17:12:20.566631   28320 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 17:12:20.566644   28320 out.go:304] Setting ErrFile to fd 2...
I0729 17:12:20.566649   28320 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 17:12:20.566801   28320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
I0729 17:12:20.567322   28320 config.go:182] Loaded profile config "functional-419822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 17:12:20.567803   28320 config.go:182] Loaded profile config "functional-419822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 17:12:20.568229   28320 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 17:12:20.568279   28320 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 17:12:20.587317   28320 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37911
I0729 17:12:20.587809   28320 main.go:141] libmachine: () Calling .GetVersion
I0729 17:12:20.588304   28320 main.go:141] libmachine: Using API Version  1
I0729 17:12:20.588323   28320 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 17:12:20.588677   28320 main.go:141] libmachine: () Calling .GetMachineName
I0729 17:12:20.588890   28320 main.go:141] libmachine: (functional-419822) Calling .GetState
I0729 17:12:20.590704   28320 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 17:12:20.590749   28320 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 17:12:20.604714   28320 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46123
I0729 17:12:20.605056   28320 main.go:141] libmachine: () Calling .GetVersion
I0729 17:12:20.605508   28320 main.go:141] libmachine: Using API Version  1
I0729 17:12:20.605531   28320 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 17:12:20.605833   28320 main.go:141] libmachine: () Calling .GetMachineName
I0729 17:12:20.606041   28320 main.go:141] libmachine: (functional-419822) Calling .DriverName
I0729 17:12:20.606242   28320 ssh_runner.go:195] Run: systemctl --version
I0729 17:12:20.606262   28320 main.go:141] libmachine: (functional-419822) Calling .GetSSHHostname
I0729 17:12:20.609021   28320 main.go:141] libmachine: (functional-419822) DBG | domain functional-419822 has defined MAC address 52:54:00:af:4e:d9 in network mk-functional-419822
I0729 17:12:20.609406   28320 main.go:141] libmachine: (functional-419822) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:4e:d9", ip: ""} in network mk-functional-419822: {Iface:virbr1 ExpiryTime:2024-07-29 18:08:42 +0000 UTC Type:0 Mac:52:54:00:af:4e:d9 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:functional-419822 Clientid:01:52:54:00:af:4e:d9}
I0729 17:12:20.609445   28320 main.go:141] libmachine: (functional-419822) DBG | domain functional-419822 has defined IP address 192.168.39.26 and MAC address 52:54:00:af:4e:d9 in network mk-functional-419822
I0729 17:12:20.609596   28320 main.go:141] libmachine: (functional-419822) Calling .GetSSHPort
I0729 17:12:20.609762   28320 main.go:141] libmachine: (functional-419822) Calling .GetSSHKeyPath
I0729 17:12:20.609919   28320 main.go:141] libmachine: (functional-419822) Calling .GetSSHUsername
I0729 17:12:20.610049   28320 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/functional-419822/id_rsa Username:docker}
I0729 17:12:20.693212   28320 build_images.go:161] Building image from path: /tmp/build.3728713503.tar
I0729 17:12:20.693275   28320 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0729 17:12:20.704246   28320 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3728713503.tar
I0729 17:12:20.708415   28320 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3728713503.tar: stat -c "%s %y" /var/lib/minikube/build/build.3728713503.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3728713503.tar': No such file or directory
I0729 17:12:20.708452   28320 ssh_runner.go:362] scp /tmp/build.3728713503.tar --> /var/lib/minikube/build/build.3728713503.tar (3072 bytes)
I0729 17:12:20.734931   28320 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3728713503
I0729 17:12:20.744855   28320 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3728713503 -xf /var/lib/minikube/build/build.3728713503.tar
I0729 17:12:20.754596   28320 crio.go:315] Building image: /var/lib/minikube/build/build.3728713503
I0729 17:12:20.754668   28320 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-419822 /var/lib/minikube/build/build.3728713503 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0729 17:12:22.527876   28320 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-419822 /var/lib/minikube/build/build.3728713503 --cgroup-manager=cgroupfs: (1.773175085s)
I0729 17:12:22.527963   28320 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3728713503
I0729 17:12:22.553726   28320 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3728713503.tar
I0729 17:12:22.564220   28320 build_images.go:217] Built localhost/my-image:functional-419822 from /tmp/build.3728713503.tar
I0729 17:12:22.564254   28320 build_images.go:133] succeeded building to: functional-419822
I0729 17:12:22.564260   28320 build_images.go:134] failed building to: 
I0729 17:12:22.564285   28320 main.go:141] libmachine: Making call to close driver server
I0729 17:12:22.564303   28320 main.go:141] libmachine: (functional-419822) Calling .Close
I0729 17:12:22.564583   28320 main.go:141] libmachine: (functional-419822) DBG | Closing plugin on server side
I0729 17:12:22.564608   28320 main.go:141] libmachine: Successfully made call to close driver server
I0729 17:12:22.564623   28320 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 17:12:22.564640   28320 main.go:141] libmachine: Making call to close driver server
I0729 17:12:22.564652   28320 main.go:141] libmachine: (functional-419822) Calling .Close
I0729 17:12:22.564859   28320 main.go:141] libmachine: Successfully made call to close driver server
I0729 17:12:22.564880   28320 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.78s)
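The three STEP lines above imply a minimal build context: a Dockerfile that pulls gcr.io/k8s-minikube/busybox, runs a no-op, and adds content.txt. A hedged shell sketch of reproducing that build by hand; the directory and file contents are illustrative (the real context lives in testdata/build), and minikube drives the same build through podman inside the node:

mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
echo "test content" > content.txt          # placeholder; the actual content.txt ships with the test data
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-amd64 -p functional-419822 image build -t localhost/my-image:functional-419822 .
out/minikube-linux-amd64 -p functional-419822 image ls     # the new localhost/my-image tag should appear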

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-419822
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 image load --daemon docker.io/kicbase/echo-server:functional-419822 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-419822 image load --daemon docker.io/kicbase/echo-server:functional-419822 --alsologtostderr: (2.299847257s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.54s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 image load --daemon docker.io/kicbase/echo-server:functional-419822 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.46s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-419822
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 image load --daemon docker.io/kicbase/echo-server:functional-419822 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.54s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 image save docker.io/kicbase/echo-server:functional-419822 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-419822 image save docker.io/kicbase/echo-server:functional-419822 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (7.831214305s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.83s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 service list -o json
functional_test.go:1490: Took "292.18692ms" to run "out/minikube-linux-amd64 -p functional-419822 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.26:31768
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.26:31768
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.47s)
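The ServiceCmd checks above resolve the same hello-node NodePort in several output forms; a compact sketch of the sequence, using the commands as the log shows them (the endpoint 192.168.39.26:31768 is specific to this run):

out/minikube-linux-amd64 -p functional-419822 service list                                    # human-readable table
out/minikube-linux-amd64 -p functional-419822 service list -o json                            # machine-readable listing
out/minikube-linux-amd64 -p functional-419822 service --namespace=default --https --url hello-node
out/minikube-linux-amd64 -p functional-419822 service hello-node --url --format='{{.IP}}'     # IP only
out/minikube-linux-amd64 -p functional-419822 service hello-node --url                        # full http URL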

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 image rm docker.io/kicbase/echo-server:functional-419822 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.13s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-419822
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 image save --daemon docker.io/kicbase/echo-server:functional-419822 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-419822
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)
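Taken together, the image tests above walk a save/remove/load round trip. A minimal sketch of the same flow; the image name and tarball path follow this run's log, so adjust both for your own profile:

IMG=docker.io/kicbase/echo-server:functional-419822
TAR=/home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar
out/minikube-linux-amd64 -p functional-419822 image save "$IMG" "$TAR"      # cluster -> tarball
out/minikube-linux-amd64 -p functional-419822 image rm "$IMG"               # drop it from the node's runtime
out/minikube-linux-amd64 -p functional-419822 image load "$TAR"             # tarball -> cluster
out/minikube-linux-amd64 -p functional-419822 image save --daemon "$IMG"    # cluster -> local docker daemon
out/minikube-linux-amd64 -p functional-419822 image ls | grep echo-server   # confirm the tag is back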

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-419822 /tmp/TestFunctionalparallelMountCmdspecific-port2387747470/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419822 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (181.546081ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-419822 /tmp/TestFunctionalparallelMountCmdspecific-port2387747470/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419822 ssh "sudo umount -f /mount-9p": exit status 1 (185.001944ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-419822 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-419822 /tmp/TestFunctionalparallelMountCmdspecific-port2387747470/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.54s)
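A hedged reconstruction of the specific-port flow above, with an illustrative host directory in place of the test's temporary path. The first findmnt may fail with exit status 1 while the 9p mount is still coming up, which is exactly what the log records before the retry succeeds:

mkdir -p /tmp/mount-src
out/minikube-linux-amd64 mount -p functional-419822 /tmp/mount-src:/mount-9p --port 46464 &   # keep the mount helper running in the background
out/minikube-linux-amd64 -p functional-419822 ssh "findmnt -T /mount-9p | grep 9p"            # confirm the 9p mount is visible in the guest
out/minikube-linux-amd64 -p functional-419822 ssh -- ls -la /mount-9p                         # list the mounted contents
out/minikube-linux-amd64 -p functional-419822 ssh "sudo umount -f /mount-9p"                  # returns status 32 once the mount is already gone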

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-419822 /tmp/TestFunctionalparallelMountCmdVerifyCleanup732295623/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-419822 /tmp/TestFunctionalparallelMountCmdVerifyCleanup732295623/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-419822 /tmp/TestFunctionalparallelMountCmdVerifyCleanup732295623/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419822 ssh "findmnt -T" /mount1: exit status 1 (221.557476ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-419822 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-419822 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-419822 /tmp/TestFunctionalparallelMountCmdVerifyCleanup732295623/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-419822 /tmp/TestFunctionalparallelMountCmdVerifyCleanup732295623/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-419822 /tmp/TestFunctionalparallelMountCmdVerifyCleanup732295623/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.19s)
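VerifyCleanup starts three mounts from one host directory and then tears them all down with a single kill; a short sketch under the same assumption of an illustrative /tmp/mount-src directory:

out/minikube-linux-amd64 mount -p functional-419822 /tmp/mount-src:/mount1 &
out/minikube-linux-amd64 mount -p functional-419822 /tmp/mount-src:/mount2 &
out/minikube-linux-amd64 mount -p functional-419822 /tmp/mount-src:/mount3 &
out/minikube-linux-amd64 -p functional-419822 ssh "findmnt -T" /mount1      # repeat for /mount2 and /mount3
out/minikube-linux-amd64 mount -p functional-419822 --kill=true             # kills every mount helper for the profile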

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-419822
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-419822
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-419822
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (208.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-900414 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 17:16:52.902524   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
E0729 17:16:52.907867   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
E0729 17:16:52.918147   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
E0729 17:16:52.938468   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
E0729 17:16:52.978985   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
E0729 17:16:53.059358   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
E0729 17:16:53.219821   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
E0729 17:16:53.541000   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
E0729 17:16:54.182141   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
E0729 17:16:55.462939   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
E0729 17:16:58.023251   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
E0729 17:17:03.144038   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
E0729 17:17:13.384661   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
E0729 17:17:33.865753   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
E0729 17:18:14.825929   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
E0729 17:18:29.676980   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-900414 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m28.277081709s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (208.93s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-900414 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-900414 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-900414 -- rollout status deployment/busybox: (2.244230264s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-900414 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-900414 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-900414 -- exec busybox-fc5497c4f-4fv4t -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-900414 -- exec busybox-fc5497c4f-dqz55 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-900414 -- exec busybox-fc5497c4f-s9sz8 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-900414 -- exec busybox-fc5497c4f-4fv4t -- nslookup kubernetes.default
ha_test.go:181: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-900414 -- exec busybox-fc5497c4f-4fv4t -- nslookup kubernetes.default: (1.929621693s)
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-900414 -- exec busybox-fc5497c4f-dqz55 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-900414 -- exec busybox-fc5497c4f-s9sz8 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-900414 -- exec busybox-fc5497c4f-4fv4t -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-900414 -- exec busybox-fc5497c4f-dqz55 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-900414 -- exec busybox-fc5497c4f-s9sz8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.05s)
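DeployApp applies the busybox deployment and then checks in-cluster DNS from each replica. A minimal sketch against a single pod; the pod name comes from this run, so list pods first on your own cluster:

out/minikube-linux-amd64 kubectl -p ha-900414 -- rollout status deployment/busybox
out/minikube-linux-amd64 kubectl -p ha-900414 -- get pods -o jsonpath='{.items[*].metadata.name}'
out/minikube-linux-amd64 kubectl -p ha-900414 -- exec busybox-fc5497c4f-4fv4t -- nslookup kubernetes.io
out/minikube-linux-amd64 kubectl -p ha-900414 -- exec busybox-fc5497c4f-4fv4t -- nslookup kubernetes.default
out/minikube-linux-amd64 kubectl -p ha-900414 -- exec busybox-fc5497c4f-4fv4t -- nslookup kubernetes.default.svc.cluster.local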

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-900414 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-900414 -- exec busybox-fc5497c4f-4fv4t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-900414 -- exec busybox-fc5497c4f-4fv4t -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-900414 -- exec busybox-fc5497c4f-dqz55 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-900414 -- exec busybox-fc5497c4f-dqz55 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-900414 -- exec busybox-fc5497c4f-s9sz8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-900414 -- exec busybox-fc5497c4f-s9sz8 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.19s)
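PingHostFromPods does the same per-pod loop for host reachability: resolve host.minikube.internal inside the pod, then ping the extracted gateway address (192.168.39.1 in this run). A sketch for one pod:

out/minikube-linux-amd64 kubectl -p ha-900414 -- exec busybox-fc5497c4f-4fv4t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
out/minikube-linux-amd64 kubectl -p ha-900414 -- exec busybox-fc5497c4f-4fv4t -- sh -c "ping -c 1 192.168.39.1"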

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (56.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-900414 -v=7 --alsologtostderr
E0729 17:19:36.746719   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-900414 -v=7 --alsologtostderr: (55.480143222s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.28s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-900414 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 cp testdata/cp-test.txt ha-900414:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 cp ha-900414:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3654370545/001/cp-test_ha-900414.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 cp ha-900414:/home/docker/cp-test.txt ha-900414-m02:/home/docker/cp-test_ha-900414_ha-900414-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414-m02 "sudo cat /home/docker/cp-test_ha-900414_ha-900414-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 cp ha-900414:/home/docker/cp-test.txt ha-900414-m03:/home/docker/cp-test_ha-900414_ha-900414-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414-m03 "sudo cat /home/docker/cp-test_ha-900414_ha-900414-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 cp ha-900414:/home/docker/cp-test.txt ha-900414-m04:/home/docker/cp-test_ha-900414_ha-900414-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414-m04 "sudo cat /home/docker/cp-test_ha-900414_ha-900414-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 cp testdata/cp-test.txt ha-900414-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 cp ha-900414-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3654370545/001/cp-test_ha-900414-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 cp ha-900414-m02:/home/docker/cp-test.txt ha-900414:/home/docker/cp-test_ha-900414-m02_ha-900414.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414 "sudo cat /home/docker/cp-test_ha-900414-m02_ha-900414.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 cp ha-900414-m02:/home/docker/cp-test.txt ha-900414-m03:/home/docker/cp-test_ha-900414-m02_ha-900414-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414-m03 "sudo cat /home/docker/cp-test_ha-900414-m02_ha-900414-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 cp ha-900414-m02:/home/docker/cp-test.txt ha-900414-m04:/home/docker/cp-test_ha-900414-m02_ha-900414-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414-m04 "sudo cat /home/docker/cp-test_ha-900414-m02_ha-900414-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 cp testdata/cp-test.txt ha-900414-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 cp ha-900414-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3654370545/001/cp-test_ha-900414-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 cp ha-900414-m03:/home/docker/cp-test.txt ha-900414:/home/docker/cp-test_ha-900414-m03_ha-900414.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414 "sudo cat /home/docker/cp-test_ha-900414-m03_ha-900414.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 cp ha-900414-m03:/home/docker/cp-test.txt ha-900414-m02:/home/docker/cp-test_ha-900414-m03_ha-900414-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414-m02 "sudo cat /home/docker/cp-test_ha-900414-m03_ha-900414-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 cp ha-900414-m03:/home/docker/cp-test.txt ha-900414-m04:/home/docker/cp-test_ha-900414-m03_ha-900414-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414-m04 "sudo cat /home/docker/cp-test_ha-900414-m03_ha-900414-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 cp testdata/cp-test.txt ha-900414-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 cp ha-900414-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3654370545/001/cp-test_ha-900414-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 cp ha-900414-m04:/home/docker/cp-test.txt ha-900414:/home/docker/cp-test_ha-900414-m04_ha-900414.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414 "sudo cat /home/docker/cp-test_ha-900414-m04_ha-900414.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 cp ha-900414-m04:/home/docker/cp-test.txt ha-900414-m02:/home/docker/cp-test_ha-900414-m04_ha-900414-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414-m02 "sudo cat /home/docker/cp-test_ha-900414-m04_ha-900414-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 cp ha-900414-m04:/home/docker/cp-test.txt ha-900414-m03:/home/docker/cp-test_ha-900414-m04_ha-900414-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414-m03 "sudo cat /home/docker/cp-test_ha-900414-m04_ha-900414-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.40s)
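CopyFile fans a test file out from the host to every node and between every pair of nodes, verifying each copy with ssh + cat. One hop of that matrix, using the commands as logged:

out/minikube-linux-amd64 -p ha-900414 cp testdata/cp-test.txt ha-900414:/home/docker/cp-test.txt
out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414 "sudo cat /home/docker/cp-test.txt"
out/minikube-linux-amd64 -p ha-900414 cp ha-900414:/home/docker/cp-test.txt ha-900414-m02:/home/docker/cp-test_ha-900414_ha-900414-m02.txt
out/minikube-linux-amd64 -p ha-900414 ssh -n ha-900414-m02 "sudo cat /home/docker/cp-test_ha-900414_ha-900414-m02.txt"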

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.479354609s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (17.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-900414 node delete m03 -v=7 --alsologtostderr: (16.41806888s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.13s)
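After removing m03 the test asserts that every remaining node still reports Ready, using a go-template over the node conditions. A sketch of that check with shell-safe quoting (the logged form shows the template as Go printed the argument list):

out/minikube-linux-amd64 -p ha-900414 node delete m03 -v=7 --alsologtostderr
kubectl get nodes
kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'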

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (353.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-900414 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 17:33:15.948974   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
E0729 17:33:29.676558   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
E0729 17:36:52.902650   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
E0729 17:38:29.676647   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-900414 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m53.05499941s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (353.85s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (74.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-900414 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-900414 --control-plane -v=7 --alsologtostderr: (1m13.869053146s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (74.70s)
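AddWorkerNode and AddSecondaryNode exercise the same subcommand with and without --control-plane; a compact sketch of both joins followed by the status check the tests run:

out/minikube-linux-amd64 node add -p ha-900414 -v=7 --alsologtostderr                    # joins as a worker
out/minikube-linux-amd64 node add -p ha-900414 --control-plane -v=7 --alsologtostderr    # joins as an additional control plane
out/minikube-linux-amd64 -p ha-900414 status -v=7 --alsologtostderr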

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

                                                
                                    
x
+
TestJSONOutput/start/Command (95.05s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-166533 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0729 17:41:32.723455   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-166533 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m35.046906799s)
--- PASS: TestJSONOutput/start/Command (95.05s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-166533 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-166533 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.31s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-166533 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-166533 --output=json --user=testUser: (7.310794287s)
--- PASS: TestJSONOutput/stop/Command (7.31s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.18s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-605619 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-605619 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (56.231861ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"847c8cfa-efff-454f-813b-428444661acf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-605619] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"26d41364-9bdb-4c3b-a292-c7282c33dae0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19345"}}
	{"specversion":"1.0","id":"f84c3086-4329-4d29-b9a1-c1f7a222e49c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e1e91c40-c728-403a-aa89-07957739eeea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig"}}
	{"specversion":"1.0","id":"c0086e6d-28d7-4135-bdf1-554f09393edb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube"}}
	{"specversion":"1.0","id":"2079d611-ff61-42d4-810e-dd8bf877d9f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2eed4790-adf6-46ff-a964-b7263d2b1ecb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"21e81a77-8da2-49a8-aaeb-b6081dcfd636","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-605619" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-605619
--- PASS: TestErrorJSONOutput (0.18s)

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (90.6s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-034064 --driver=kvm2  --container-runtime=crio
E0729 17:41:52.904298   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-034064 --driver=kvm2  --container-runtime=crio: (43.421748521s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-036614 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-036614 --driver=kvm2  --container-runtime=crio: (44.391744056s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-034064
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-036614
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-036614" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-036614
helpers_test.go:175: Cleaning up "first-034064" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-034064
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-034064: (1.012621479s)
--- PASS: TestMinikubeProfile (90.60s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (26.94s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-376343 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0729 17:43:29.676192   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-376343 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.941673736s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.94s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-376343 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-376343 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.35s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.12s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-388568 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-388568 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.120015773s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.12s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-388568 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-388568 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-376343 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-388568 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-388568 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-388568
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-388568: (1.26718175s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (24.74s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-388568
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-388568: (23.741354516s)
--- PASS: TestMountStart/serial/RestartStopped (24.74s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-388568 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-388568 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (119.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-602258 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-602258 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m58.633334928s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (119.04s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602258 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602258 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-602258 -- rollout status deployment/busybox: (1.650934516s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602258 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602258 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602258 -- exec busybox-fc5497c4f-kqrzf -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602258 -- exec busybox-fc5497c4f-mmr6c -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602258 -- exec busybox-fc5497c4f-kqrzf -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602258 -- exec busybox-fc5497c4f-mmr6c -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602258 -- exec busybox-fc5497c4f-kqrzf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602258 -- exec busybox-fc5497c4f-mmr6c -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.05s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602258 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602258 -- exec busybox-fc5497c4f-kqrzf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602258 -- exec busybox-fc5497c4f-kqrzf -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602258 -- exec busybox-fc5497c4f-mmr6c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602258 -- exec busybox-fc5497c4f-mmr6c -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                    
TestMultiNode/serial/AddNode (50.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-602258 -v 3 --alsologtostderr
E0729 17:46:52.902739   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-602258 -v 3 --alsologtostderr: (49.499472978s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.05s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-602258 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 cp testdata/cp-test.txt multinode-602258:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 ssh -n multinode-602258 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 cp multinode-602258:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile669002766/001/cp-test_multinode-602258.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 ssh -n multinode-602258 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 cp multinode-602258:/home/docker/cp-test.txt multinode-602258-m02:/home/docker/cp-test_multinode-602258_multinode-602258-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 ssh -n multinode-602258 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 ssh -n multinode-602258-m02 "sudo cat /home/docker/cp-test_multinode-602258_multinode-602258-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 cp multinode-602258:/home/docker/cp-test.txt multinode-602258-m03:/home/docker/cp-test_multinode-602258_multinode-602258-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 ssh -n multinode-602258 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 ssh -n multinode-602258-m03 "sudo cat /home/docker/cp-test_multinode-602258_multinode-602258-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 cp testdata/cp-test.txt multinode-602258-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 ssh -n multinode-602258-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 cp multinode-602258-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile669002766/001/cp-test_multinode-602258-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 ssh -n multinode-602258-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 cp multinode-602258-m02:/home/docker/cp-test.txt multinode-602258:/home/docker/cp-test_multinode-602258-m02_multinode-602258.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 ssh -n multinode-602258-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 ssh -n multinode-602258 "sudo cat /home/docker/cp-test_multinode-602258-m02_multinode-602258.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 cp multinode-602258-m02:/home/docker/cp-test.txt multinode-602258-m03:/home/docker/cp-test_multinode-602258-m02_multinode-602258-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 ssh -n multinode-602258-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 ssh -n multinode-602258-m03 "sudo cat /home/docker/cp-test_multinode-602258-m02_multinode-602258-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 cp testdata/cp-test.txt multinode-602258-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 ssh -n multinode-602258-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 cp multinode-602258-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile669002766/001/cp-test_multinode-602258-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 ssh -n multinode-602258-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 cp multinode-602258-m03:/home/docker/cp-test.txt multinode-602258:/home/docker/cp-test_multinode-602258-m03_multinode-602258.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 ssh -n multinode-602258-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 ssh -n multinode-602258 "sudo cat /home/docker/cp-test_multinode-602258-m03_multinode-602258.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 cp multinode-602258-m03:/home/docker/cp-test.txt multinode-602258-m02:/home/docker/cp-test_multinode-602258-m03_multinode-602258-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 ssh -n multinode-602258-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 ssh -n multinode-602258-m02 "sudo cat /home/docker/cp-test_multinode-602258-m03_multinode-602258-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.93s)

                                                
                                    
TestMultiNode/serial/StopNode (2.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-602258 node stop m03: (1.463472721s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-602258 status: exit status 7 (410.315533ms)

                                                
                                                
-- stdout --
	multinode-602258
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-602258-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-602258-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-602258 status --alsologtostderr: exit status 7 (414.395239ms)

                                                
                                                
-- stdout --
	multinode-602258
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-602258-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-602258-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:47:41.049680   47311 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:47:41.049809   47311 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:47:41.049820   47311 out.go:304] Setting ErrFile to fd 2...
	I0729 17:47:41.049825   47311 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:47:41.049982   47311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 17:47:41.050157   47311 out.go:298] Setting JSON to false
	I0729 17:47:41.050185   47311 mustload.go:65] Loading cluster: multinode-602258
	I0729 17:47:41.050231   47311 notify.go:220] Checking for updates...
	I0729 17:47:41.050695   47311 config.go:182] Loaded profile config "multinode-602258": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:47:41.050716   47311 status.go:255] checking status of multinode-602258 ...
	I0729 17:47:41.051208   47311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:47:41.051260   47311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:47:41.068998   47311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33317
	I0729 17:47:41.069417   47311 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:47:41.069926   47311 main.go:141] libmachine: Using API Version  1
	I0729 17:47:41.069949   47311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:47:41.070307   47311 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:47:41.070531   47311 main.go:141] libmachine: (multinode-602258) Calling .GetState
	I0729 17:47:41.072014   47311 status.go:330] multinode-602258 host status = "Running" (err=<nil>)
	I0729 17:47:41.072028   47311 host.go:66] Checking if "multinode-602258" exists ...
	I0729 17:47:41.072343   47311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:47:41.072391   47311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:47:41.087087   47311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40853
	I0729 17:47:41.087494   47311 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:47:41.087897   47311 main.go:141] libmachine: Using API Version  1
	I0729 17:47:41.087925   47311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:47:41.088237   47311 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:47:41.088385   47311 main.go:141] libmachine: (multinode-602258) Calling .GetIP
	I0729 17:47:41.090973   47311 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:47:41.091366   47311 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:47:41.091405   47311 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:47:41.091502   47311 host.go:66] Checking if "multinode-602258" exists ...
	I0729 17:47:41.091865   47311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:47:41.091906   47311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:47:41.107305   47311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45009
	I0729 17:47:41.107712   47311 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:47:41.108225   47311 main.go:141] libmachine: Using API Version  1
	I0729 17:47:41.108244   47311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:47:41.108531   47311 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:47:41.108718   47311 main.go:141] libmachine: (multinode-602258) Calling .DriverName
	I0729 17:47:41.108926   47311 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:47:41.108958   47311 main.go:141] libmachine: (multinode-602258) Calling .GetSSHHostname
	I0729 17:47:41.111884   47311 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:47:41.112263   47311 main.go:141] libmachine: (multinode-602258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:91:9c", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:44:53 +0000 UTC Type:0 Mac:52:54:00:af:91:9c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-602258 Clientid:01:52:54:00:af:91:9c}
	I0729 17:47:41.112295   47311 main.go:141] libmachine: (multinode-602258) DBG | domain multinode-602258 has defined IP address 192.168.39.218 and MAC address 52:54:00:af:91:9c in network mk-multinode-602258
	I0729 17:47:41.112453   47311 main.go:141] libmachine: (multinode-602258) Calling .GetSSHPort
	I0729 17:47:41.112594   47311 main.go:141] libmachine: (multinode-602258) Calling .GetSSHKeyPath
	I0729 17:47:41.112711   47311 main.go:141] libmachine: (multinode-602258) Calling .GetSSHUsername
	I0729 17:47:41.112819   47311 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/multinode-602258/id_rsa Username:docker}
	I0729 17:47:41.194317   47311 ssh_runner.go:195] Run: systemctl --version
	I0729 17:47:41.200978   47311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:47:41.215380   47311 kubeconfig.go:125] found "multinode-602258" server: "https://192.168.39.218:8443"
	I0729 17:47:41.215407   47311 api_server.go:166] Checking apiserver status ...
	I0729 17:47:41.215450   47311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:47:41.229482   47311 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1159/cgroup
	W0729 17:47:41.238760   47311 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1159/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:47:41.238806   47311 ssh_runner.go:195] Run: ls
	I0729 17:47:41.243038   47311 api_server.go:253] Checking apiserver healthz at https://192.168.39.218:8443/healthz ...
	I0729 17:47:41.247167   47311 api_server.go:279] https://192.168.39.218:8443/healthz returned 200:
	ok
	I0729 17:47:41.247190   47311 status.go:422] multinode-602258 apiserver status = Running (err=<nil>)
	I0729 17:47:41.247212   47311 status.go:257] multinode-602258 status: &{Name:multinode-602258 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:47:41.247233   47311 status.go:255] checking status of multinode-602258-m02 ...
	I0729 17:47:41.247577   47311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:47:41.247612   47311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:47:41.263069   47311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39269
	I0729 17:47:41.263429   47311 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:47:41.263860   47311 main.go:141] libmachine: Using API Version  1
	I0729 17:47:41.263881   47311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:47:41.264193   47311 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:47:41.264388   47311 main.go:141] libmachine: (multinode-602258-m02) Calling .GetState
	I0729 17:47:41.265771   47311 status.go:330] multinode-602258-m02 host status = "Running" (err=<nil>)
	I0729 17:47:41.265785   47311 host.go:66] Checking if "multinode-602258-m02" exists ...
	I0729 17:47:41.266089   47311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:47:41.266126   47311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:47:41.280857   47311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43477
	I0729 17:47:41.281284   47311 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:47:41.281712   47311 main.go:141] libmachine: Using API Version  1
	I0729 17:47:41.281728   47311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:47:41.282074   47311 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:47:41.282263   47311 main.go:141] libmachine: (multinode-602258-m02) Calling .GetIP
	I0729 17:47:41.284955   47311 main.go:141] libmachine: (multinode-602258-m02) DBG | domain multinode-602258-m02 has defined MAC address 52:54:00:a3:91:2d in network mk-multinode-602258
	I0729 17:47:41.285344   47311 main.go:141] libmachine: (multinode-602258-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:91:2d", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:46:01 +0000 UTC Type:0 Mac:52:54:00:a3:91:2d Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-602258-m02 Clientid:01:52:54:00:a3:91:2d}
	I0729 17:47:41.285368   47311 main.go:141] libmachine: (multinode-602258-m02) DBG | domain multinode-602258-m02 has defined IP address 192.168.39.107 and MAC address 52:54:00:a3:91:2d in network mk-multinode-602258
	I0729 17:47:41.285508   47311 host.go:66] Checking if "multinode-602258-m02" exists ...
	I0729 17:47:41.285790   47311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:47:41.285821   47311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:47:41.300811   47311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33713
	I0729 17:47:41.301193   47311 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:47:41.301580   47311 main.go:141] libmachine: Using API Version  1
	I0729 17:47:41.301603   47311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:47:41.301893   47311 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:47:41.302046   47311 main.go:141] libmachine: (multinode-602258-m02) Calling .DriverName
	I0729 17:47:41.302214   47311 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:47:41.302231   47311 main.go:141] libmachine: (multinode-602258-m02) Calling .GetSSHHostname
	I0729 17:47:41.304606   47311 main.go:141] libmachine: (multinode-602258-m02) DBG | domain multinode-602258-m02 has defined MAC address 52:54:00:a3:91:2d in network mk-multinode-602258
	I0729 17:47:41.304931   47311 main.go:141] libmachine: (multinode-602258-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:91:2d", ip: ""} in network mk-multinode-602258: {Iface:virbr1 ExpiryTime:2024-07-29 18:46:01 +0000 UTC Type:0 Mac:52:54:00:a3:91:2d Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-602258-m02 Clientid:01:52:54:00:a3:91:2d}
	I0729 17:47:41.304968   47311 main.go:141] libmachine: (multinode-602258-m02) DBG | domain multinode-602258-m02 has defined IP address 192.168.39.107 and MAC address 52:54:00:a3:91:2d in network mk-multinode-602258
	I0729 17:47:41.305081   47311 main.go:141] libmachine: (multinode-602258-m02) Calling .GetSSHPort
	I0729 17:47:41.305270   47311 main.go:141] libmachine: (multinode-602258-m02) Calling .GetSSHKeyPath
	I0729 17:47:41.305416   47311 main.go:141] libmachine: (multinode-602258-m02) Calling .GetSSHUsername
	I0729 17:47:41.305555   47311 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19345-11206/.minikube/machines/multinode-602258-m02/id_rsa Username:docker}
	I0729 17:47:41.389064   47311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:47:41.402527   47311 status.go:257] multinode-602258-m02 status: &{Name:multinode-602258-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:47:41.402555   47311 status.go:255] checking status of multinode-602258-m03 ...
	I0729 17:47:41.402917   47311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:47:41.402969   47311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:47:41.418566   47311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36789
	I0729 17:47:41.419005   47311 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:47:41.419465   47311 main.go:141] libmachine: Using API Version  1
	I0729 17:47:41.419485   47311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:47:41.419922   47311 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:47:41.420092   47311 main.go:141] libmachine: (multinode-602258-m03) Calling .GetState
	I0729 17:47:41.421524   47311 status.go:330] multinode-602258-m03 host status = "Stopped" (err=<nil>)
	I0729 17:47:41.421538   47311 status.go:343] host is not running, skipping remaining checks
	I0729 17:47:41.421545   47311 status.go:257] multinode-602258-m03 status: &{Name:multinode-602258-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (37.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-602258 node start m03 -v=7 --alsologtostderr: (37.280051213s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.90s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-602258 node delete m03: (1.758257602s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.27s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (180.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-602258 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 17:56:52.904584   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
E0729 17:58:12.723726   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
E0729 17:58:29.676254   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-602258 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m0.317627328s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602258 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (180.82s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (45.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-602258
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-602258-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-602258-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (58.98518ms)

                                                
                                                
-- stdout --
	* [multinode-602258-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19345
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-602258-m02' is duplicated with machine name 'multinode-602258-m02' in profile 'multinode-602258'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-602258-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-602258-m03 --driver=kvm2  --container-runtime=crio: (44.579363008s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-602258
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-602258: exit status 80 (207.209831ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-602258 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-602258-m03 already exists in multinode-602258-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-602258-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.67s)

                                                
                                    
TestScheduledStopUnix (116.03s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-505413 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-505413 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.488120821s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-505413 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-505413 -n scheduled-stop-505413
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-505413 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-505413 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-505413 -n scheduled-stop-505413
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-505413
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-505413 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-505413
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-505413: exit status 7 (63.947832ms)

                                                
                                                
-- stdout --
	scheduled-stop-505413
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-505413 -n scheduled-stop-505413
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-505413 -n scheduled-stop-505413: exit status 7 (63.101877ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-505413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-505413
--- PASS: TestScheduledStopUnix (116.03s)

                                                
                                    
TestRunningBinaryUpgrade (177.63s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1313296281 start -p running-upgrade-945114 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0729 18:06:35.950511   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
E0729 18:06:52.902249   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1313296281 start -p running-upgrade-945114 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m51.669238792s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-945114 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-945114 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m4.236439672s)
helpers_test.go:175: Cleaning up "running-upgrade-945114" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-945114
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-945114: (1.285078507s)
--- PASS: TestRunningBinaryUpgrade (177.63s)
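TestRunningBinaryUpgrade boils down to starting a profile with an older release and then re-running start on the same, still-running profile with the binary under test. A rough sketch, assuming an older minikube binary has already been downloaded to ./minikube-old (a placeholder path; the harness uses a temp file such as the /tmp/minikube-v1.26.0.* binary shown above):

	$ ./minikube-old start -p upgrade-demo --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	$ out/minikube-linux-amd64 start -p upgrade-demo --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
	$ out/minikube-linux-amd64 delete -p upgrade-demo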

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.46s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.46s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (146.04s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1589835470 start -p stopped-upgrade-353713 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1589835470 start -p stopped-upgrade-353713 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m39.253926016s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1589835470 -p stopped-upgrade-353713 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1589835470 -p stopped-upgrade-353713 stop: (2.133989367s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-353713 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-353713 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.654085728s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (146.04s)
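The stopped-upgrade variant differs only in that the old binary stops the cluster before the newer one takes over; the MinikubeLogs subtest further down then inspects the upgraded profile. Sketch with the same placeholder names as above:

	$ ./minikube-old start -p stopped-demo --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	$ ./minikube-old -p stopped-demo stop
	$ out/minikube-linux-amd64 start -p stopped-demo --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio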

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-729010 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-729010 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (100.721514ms)

                                                
                                                
-- stdout --
	* [false-729010] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19345
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 18:06:23.383606   54970 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:06:23.383739   54970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:06:23.383755   54970 out.go:304] Setting ErrFile to fd 2...
	I0729 18:06:23.383770   54970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:06:23.384281   54970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19345-11206/.minikube/bin
	I0729 18:06:23.384901   54970 out.go:298] Setting JSON to false
	I0729 18:06:23.385753   54970 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6535,"bootTime":1722269848,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:06:23.385809   54970 start.go:139] virtualization: kvm guest
	I0729 18:06:23.387825   54970 out.go:177] * [false-729010] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:06:23.389114   54970 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 18:06:23.389129   54970 notify.go:220] Checking for updates...
	I0729 18:06:23.391561   54970 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:06:23.392693   54970 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	I0729 18:06:23.393869   54970 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	I0729 18:06:23.394985   54970 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:06:23.396093   54970 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:06:23.397535   54970 config.go:182] Loaded profile config "kubernetes-upgrade-372591": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 18:06:23.397630   54970 config.go:182] Loaded profile config "offline-crio-345856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:06:23.397720   54970 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:06:23.434539   54970 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 18:06:23.435701   54970 start.go:297] selected driver: kvm2
	I0729 18:06:23.435716   54970 start.go:901] validating driver "kvm2" against <nil>
	I0729 18:06:23.435730   54970 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:06:23.437782   54970 out.go:177] 
	W0729 18:06:23.438950   54970 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0729 18:06:23.440091   54970 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-729010 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-729010

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-729010

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-729010

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-729010

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-729010

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-729010

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-729010

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-729010

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-729010

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-729010

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-729010

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-729010" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-729010" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-729010

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729010"

                                                
                                                
----------------------- debugLogs end: false-729010 [took: 2.843047739s] --------------------------------
helpers_test.go:175: Cleaning up "false-729010" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-729010
--- PASS: TestNetworkPlugins/group/false (3.08s)
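The non-zero exit above is the expected outcome: with the crio runtime, minikube rejects --cni=false (MK_USAGE: the "crio" container runtime requires CNI), and the debugLogs dump above is collected from a profile that was never created. A working start has to choose a CNI, for example any of the plugins exercised later in this group (profile name is a placeholder):

	$ minikube start -p cni-demo --memory=3072 --cni=bridge --driver=kvm2 --container-runtime=crio
	# values used elsewhere in this report: --cni=kindnet, --cni=calico, --cni=flannel,
	# or a custom manifest such as --cni=testdata/kube-flannel.yaml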

                                                
                                    
x
+
TestPause/serial/Start (59.16s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-151120 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E0729 18:08:29.676732   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-151120 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (59.15964933s)
--- PASS: TestPause/serial/Start (59.16s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-353713
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-444361 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-444361 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (57.772822ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-444361] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19345
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19345-11206/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19345-11206/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)
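As the MK_USAGE error above states, --no-kubernetes cannot be combined with --kubernetes-version. A minimal working form, with a placeholder profile name and clearing any globally pinned version first, as the error message itself suggests:

	$ minikube config unset kubernetes-version
	$ minikube start -p nok8s-demo --no-kubernetes --driver=kvm2 --container-runtime=crio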

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (52.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-444361 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-444361 --driver=kvm2  --container-runtime=crio: (51.768427123s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-444361 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (52.08s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (37.23s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-151120 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-151120 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (37.199671378s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (37.23s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (29.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-444361 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-444361 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.581324359s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-444361 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-444361 status -o json: exit status 2 (239.212607ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-444361","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-444361
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (29.65s)

                                                
                                    
x
+
TestPause/serial/Pause (0.76s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-151120 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.76s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.24s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-151120 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-151120 --output=json --layout=cluster: exit status 2 (243.639208ms)

                                                
                                                
-- stdout --
	{"Name":"pause-151120","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-151120","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.79s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-151120 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.79s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.93s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-151120 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.93s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.06s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-151120 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-151120 --alsologtostderr -v=5: (1.061721297s)
--- PASS: TestPause/serial/DeletePaused (1.06s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.48s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.48s)
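Taken together, the TestPause subtests above cover the whole lifecycle. A condensed sketch with a placeholder profile name; note that minikube status intentionally exits non-zero while components are paused (exit status 2, StatusCode 418 "Paused" in the JSON above), which the tests treat as expected:

	$ minikube start -p pause-demo --memory=2048 --install-addons=false --wait=all --driver=kvm2 --container-runtime=crio
	$ minikube pause -p pause-demo --alsologtostderr -v=5
	$ minikube status -p pause-demo --output=json --layout=cluster   # exit status 2 while paused
	$ minikube unpause -p pause-demo --alsologtostderr -v=5
	$ minikube pause -p pause-demo --alsologtostderr -v=5
	$ minikube delete -p pause-demo --alsologtostderr -v=5
	$ minikube profile list --output json                            # confirms the profile is gone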

                                                
                                    
x
+
TestNoKubernetes/serial/Start (48.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-444361 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-444361 --no-kubernetes --driver=kvm2  --container-runtime=crio: (48.568483484s)
--- PASS: TestNoKubernetes/serial/Start (48.57s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-444361 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-444361 "sudo systemctl is-active --quiet service kubelet": exit status 1 (196.875948ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.49s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-444361
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-444361: (1.270878049s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (88.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-444361 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-444361 --driver=kvm2  --container-runtime=crio: (1m28.660523601s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (88.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (71.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-729010 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-729010 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m11.223843462s)
--- PASS: TestNetworkPlugins/group/auto/Start (71.22s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-444361 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-444361 "sudo systemctl is-active --quiet service kubelet": exit status 1 (191.515272ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (105.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-729010 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-729010 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m45.063139337s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (105.06s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (105.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-729010 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0729 18:13:29.676907   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-729010 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m45.971908923s)
--- PASS: TestNetworkPlugins/group/calico/Start (105.97s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-729010 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-729010 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-ljcqr" [c281fa0e-4709-4e1d-af8d-e93380b0db48] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-ljcqr" [c281fa0e-4709-4e1d-af8d-e93380b0db48] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003938354s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (33.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-729010 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context auto-729010 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.165768767s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context auto-729010 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context auto-729010 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.151242468s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context auto-729010 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (33.02s)
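The DNS subtest above is just a retried probe: nslookup from the netcat deployment timed out twice (about 15s each) before the third attempt succeeded, which accounts for the ~33s duration. A hand-run equivalent against the same context, sketched as a simple retry loop:

	$ until kubectl --context auto-729010 exec deployment/netcat -- nslookup kubernetes.default; do sleep 5; done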

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-75cmh" [9bad02a6-7407-4de9-96c6-914bb434d4eb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006711777s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-729010 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-729010 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-729010 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-729010 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-wpqrj" [319aa7d4-ffbc-4b64-a318-2a66442cb358] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-wpqrj" [319aa7d4-ffbc-4b64-a318-2a66442cb358] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004677916s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-729010 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-729010 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-729010 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (82.00s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-729010 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-729010 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m21.996720251s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (82.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (124.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-729010 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-729010 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m4.032572842s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (124.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-9fc64" [d47b41ad-8b0b-47b9-bab5-6886e9ac7bb9] Running
E0729 18:14:52.724097   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005409916s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-729010 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-729010 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-58w9h" [ca2353c7-12bd-400e-8ecd-159b726d445d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-58w9h" [ca2353c7-12bd-400e-8ecd-159b726d445d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.00455775s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-729010 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-729010 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-729010 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (88.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-729010 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-729010 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m28.050359797s)
--- PASS: TestNetworkPlugins/group/flannel/Start (88.05s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (139.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-729010 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-729010 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (2m19.749144107s)
--- PASS: TestNetworkPlugins/group/bridge/Start (139.75s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.2s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-729010 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.24s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-729010 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-wlsrs" [4b65750c-3ce4-4338-8c52-a79e8f869538] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-wlsrs" [4b65750c-3ce4-4338-8c52-a79e8f869538] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005003766s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-729010 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-729010 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-729010 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-729010 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.24s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-729010 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-scjg2" [50e97b9c-510e-4b9e-9d66-9e0ccab29409] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-scjg2" [50e97b9c-510e-4b9e-9d66-9e0ccab29409] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003957316s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.24s)
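Every NetCatPod subtest follows the same shape: apply testdata/netcat-deployment.yaml, then poll until a pod labelled app=netcat reports Running (the log prints the Pending/Running transitions it observes). A rough client-go equivalent of that wait; the kubeconfig handling, polling interval, and error handling are illustrative assumptions rather than the harness's own helper:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(15 * time.Minute) // mirrors the 15m0s wait in the log
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods("default").List(context.TODO(),
                metav1.ListOptions{LabelSelector: "app=netcat"})
            if err == nil && len(pods.Items) > 0 &&
                pods.Items[0].Status.Phase == corev1.PodRunning {
                fmt.Println("app=netcat is running")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for app=netcat")
    }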

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-kn7gx" [4f02d354-e6c0-4d1a-b20f-2af55f6c81da] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004130232s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-729010 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.21s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-729010 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-pb4wm" [c7b4bc77-b2fe-4498-8aa8-b886c07a3eec] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-pb4wm" [c7b4bc77-b2fe-4498-8aa8-b886c07a3eec] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004351696s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-729010 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-729010 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-729010 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.29s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-729010 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-729010 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-729010 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (100.94s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-888056 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-888056 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (1m40.936120334s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (100.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (121.13s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-409322 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-409322 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (2m1.131678418s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (121.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-729010 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (14.26s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-729010 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-68rj8" [7c7aecb0-e72e-473d-b1ff-14a3ba1a9fd3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-68rj8" [7c7aecb0-e72e-473d-b1ff-14a3ba1a9fd3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 14.003873403s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (14.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-729010 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-729010 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-729010 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (98.06s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-502055 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0729 18:18:29.676399   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/addons-433102/client.crt: no such file or directory
E0729 18:18:34.527808   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/auto-729010/client.crt: no such file or directory
E0729 18:18:34.533097   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/auto-729010/client.crt: no such file or directory
E0729 18:18:34.543509   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/auto-729010/client.crt: no such file or directory
E0729 18:18:34.563765   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/auto-729010/client.crt: no such file or directory
E0729 18:18:34.604139   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/auto-729010/client.crt: no such file or directory
E0729 18:18:34.684936   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/auto-729010/client.crt: no such file or directory
E0729 18:18:34.845333   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/auto-729010/client.crt: no such file or directory
E0729 18:18:35.166159   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/auto-729010/client.crt: no such file or directory
E0729 18:18:35.806299   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/auto-729010/client.crt: no such file or directory
E0729 18:18:37.087065   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/auto-729010/client.crt: no such file or directory
E0729 18:18:39.648255   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/auto-729010/client.crt: no such file or directory
E0729 18:18:44.768624   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/auto-729010/client.crt: no such file or directory
E0729 18:18:55.009293   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/auto-729010/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-502055 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m38.058758893s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (98.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.29s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-888056 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bd6f14e0-5b6e-4107-9104-01266ba21535] Pending
helpers_test.go:344: "busybox" [bd6f14e0-5b6e-4107-9104-01266ba21535] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bd6f14e0-5b6e-4107-9104-01266ba21535] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.005458005s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-888056 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.29s)
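Once the busybox pod is Running, DeployApp execs ulimit -n in it as a quick sanity check on the container's open-file limit. A small sketch of the same check with the value parsed as an integer (the profile name comes from this log; the parsing and reporting are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strconv"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "--context", "no-preload-888056",
            "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
        if err != nil {
            fmt.Println("exec failed:", err)
            return
        }
        n, err := strconv.Atoi(strings.TrimSpace(string(out)))
        if err != nil {
            fmt.Println("unexpected output:", strings.TrimSpace(string(out)))
            return
        }
        fmt.Printf("open-file limit inside the pod: %d\n", n)
    }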

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-888056 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-888056 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.27s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-409322 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d6590979-5c79-42f7-b379-172e797a15cd] Pending
helpers_test.go:344: "busybox" [d6590979-5c79-42f7-b379-172e797a15cd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d6590979-5c79-42f7-b379-172e797a15cd] Running
E0729 18:19:36.009855   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/kindnet-729010/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004262628s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-409322 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-409322 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-409322 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-502055 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [16c2fbe7-3235-4c35-b89d-d36c39f5e8e3] Pending
helpers_test.go:344: "busybox" [16c2fbe7-3235-4c35-b89d-d36c39f5e8e3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0729 18:20:00.552886   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/calico-729010/client.crt: no such file or directory
helpers_test.go:344: "busybox" [16c2fbe7-3235-4c35-b89d-d36c39f5e8e3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.00427511s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-502055 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.88s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-502055 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-502055 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (681.13s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-888056 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0729 18:21:52.902269   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
E0729 18:21:54.099507   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/enable-default-cni-729010/client.crt: no such file or directory
E0729 18:21:54.104771   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/enable-default-cni-729010/client.crt: no such file or directory
E0729 18:21:54.115012   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/enable-default-cni-729010/client.crt: no such file or directory
E0729 18:21:54.135279   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/enable-default-cni-729010/client.crt: no such file or directory
E0729 18:21:54.175608   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/enable-default-cni-729010/client.crt: no such file or directory
E0729 18:21:54.255962   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/enable-default-cni-729010/client.crt: no such file or directory
E0729 18:21:54.288245   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/flannel-729010/client.crt: no such file or directory
E0729 18:21:54.293485   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/flannel-729010/client.crt: no such file or directory
E0729 18:21:54.303728   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/flannel-729010/client.crt: no such file or directory
E0729 18:21:54.324086   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/flannel-729010/client.crt: no such file or directory
E0729 18:21:54.364370   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/flannel-729010/client.crt: no such file or directory
E0729 18:21:54.416559   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/enable-default-cni-729010/client.crt: no such file or directory
E0729 18:21:54.444810   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/flannel-729010/client.crt: no such file or directory
E0729 18:21:54.605868   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/flannel-729010/client.crt: no such file or directory
E0729 18:21:54.737455   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/enable-default-cni-729010/client.crt: no such file or directory
E0729 18:21:54.926820   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/flannel-729010/client.crt: no such file or directory
E0729 18:21:55.378470   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/enable-default-cni-729010/client.crt: no such file or directory
E0729 18:21:55.567006   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/flannel-729010/client.crt: no such file or directory
E0729 18:21:56.658772   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/enable-default-cni-729010/client.crt: no such file or directory
E0729 18:21:56.847642   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/flannel-729010/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-888056 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (11m20.889470178s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-888056 -n no-preload-888056
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (681.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (600.77s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-409322 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0729 18:22:14.581760   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/enable-default-cni-729010/client.crt: no such file or directory
E0729 18:22:14.770142   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/flannel-729010/client.crt: no such file or directory
E0729 18:22:19.144131   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/custom-flannel-729010/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-409322 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (10m0.533812206s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-409322 -n embed-certs-409322
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (600.77s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (540.31s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-502055 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-502055 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (9m0.056655668s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-502055 -n default-k8s-diff-port-502055
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (540.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (2.28s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-386663 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-386663 --alsologtostderr -v=3: (2.28363407s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386663 -n old-k8s-version-386663
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386663 -n old-k8s-version-386663: exit status 7 (64.170266ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-386663 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
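Note the pattern here: minikube status --format={{.Host}} exits with status 7 once the profile is stopped, and the test explicitly treats that as acceptable ("may be ok") before re-enabling the dashboard addon. A sketch of handling that exit code deliberately instead of treating every non-zero exit as failure; interpreting codes other than 0 and the observed 7 is an assumption on my part:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "status",
            "--format={{.Host}}", "-p", "old-k8s-version-386663").Output()
        host := strings.TrimSpace(string(out))

        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("host state:", host)
        case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
            // Matches the log above: exit status 7 with "Stopped" on stdout.
            fmt.Println("profile is stopped:", host)
        default:
            fmt.Println("status check failed:", err)
        }
    }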

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (47.27s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-903256 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0729 18:46:52.902215   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/functional-419822/client.crt: no such file or directory
E0729 18:46:54.099184   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/enable-default-cni-729010/client.crt: no such file or directory
E0729 18:46:54.288643   18393 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19345-11206/.minikube/profiles/flannel-729010/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-903256 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (47.266404365s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-903256 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-903256 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.050510256s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.42s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-903256 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-903256 --alsologtostderr -v=3: (10.422693361s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.42s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-903256 -n newest-cni-903256
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-903256 -n newest-cni-903256: exit status 7 (79.909123ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-903256 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (72.13s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-903256 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-903256 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (1m11.803470006s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-903256 -n newest-cni-903256
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (72.13s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-903256 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.31s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-903256 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-903256 -n newest-cni-903256
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-903256 -n newest-cni-903256: exit status 2 (225.389057ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-903256 -n newest-cni-903256
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-903256 -n newest-cni-903256: exit status 2 (226.603829ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-903256 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-903256 -n newest-cni-903256
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-903256 -n newest-cni-903256
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.31s)
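The Pause subtest drives pause/unpause and reads two status fields: after pausing, {{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped, each via a status call that exits 2 and is again accepted as "may be ok". A compact sketch of the same sequence; the helper and its error handling are illustrative, while the commands and field names are the ones recorded above:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // statusField returns one minikube status field, ignoring the exit code,
    // since paused profiles exit non-zero by design (see the log above).
    func statusField(profile, field string) string {
        out, _ := exec.Command("out/minikube-linux-amd64", "status",
            "--format={{."+field+"}}", "-p", profile).Output()
        return strings.TrimSpace(string(out))
    }

    func main() {
        const profile = "newest-cni-903256"

        exec.Command("out/minikube-linux-amd64", "pause", "-p", profile).Run()
        fmt.Println("after pause:   APIServer =", statusField(profile, "APIServer"),
            "Kubelet =", statusField(profile, "Kubelet"))

        exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile).Run()
        fmt.Println("after unpause: APIServer =", statusField(profile, "APIServer"),
            "Kubelet =", statusField(profile, "Kubelet"))
    }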

                                                
                                    

Test skip (40/326)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.31.0-beta.0/binaries 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0
47 TestAddons/parallel/Olm 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
147 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
148 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
149 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
150 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
151 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
152 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
153 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
154 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
185 TestImageBuild 0
212 TestKicCustomNetwork 0
213 TestKicExistingNetwork 0
214 TestKicCustomSubnet 0
215 TestKicStaticIP 0
247 TestChangeNoneUser 0
250 TestScheduledStopWindows 0
252 TestSkaffold 0
254 TestInsufficientStorage 0
258 TestMissingContainerUpgrade 0
262 TestNetworkPlugins/group/kubenet 3
271 TestNetworkPlugins/group/cilium 2.94
277 TestStartStop/group/disable-driver-mounts 0.18
TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
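TestGvisorAddon is gated behind a command-line flag rather than an environment probe. A minimal sketch of that pattern, assuming a boolean --gvisor flag as the log message suggests; this is not the actual gvisor_addon_test.go source:

// Hedged sketch of a flag-gated test; the flag wiring is inferred from the
// log message "--gvisor=false", not copied from the minikube source.
package gvisor_test

import (
	"flag"
	"testing"
)

var gvisor = flag.Bool("gvisor", false, "run the gVisor addon test")

func TestGvisorAddon(t *testing.T) {
	if !*gvisor {
		t.Skip("skipping test because --gvisor=false")
	}
	// the real test would enable the gvisor addon and schedule a sandboxed pod here
}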

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-729010 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-729010

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-729010

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-729010

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-729010

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-729010

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-729010

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-729010

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-729010

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-729010

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-729010

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-729010

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-729010" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-729010" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-729010

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729010"

                                                
                                                
----------------------- debugLogs end: kubenet-729010 [took: 2.86411462s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-729010" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-729010
--- SKIP: TestNetworkPlugins/group/kubenet (3.00s)
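The kubenet entry above skips because kubenet is not a CNI plugin, while cri-o only configures pod networking through CNI. A minimal sketch of that guard, assuming a hypothetical containerRuntime() helper and a CONTAINER_RUNTIME environment variable; this is not the actual net_test.go code:

// Hedged sketch, not the minikube source. containerRuntime() is a hypothetical
// helper standing in for however the suite learns which runtime is under test.
// Only the docker runtime can run kubenet without a CNI configuration, so the
// subtest skips itself on crio and containerd.
package net_test

import (
	"os"
	"testing"
)

func containerRuntime() string {
	if rt := os.Getenv("CONTAINER_RUNTIME"); rt != "" {
		return rt
	}
	return "docker"
}

func skipIfKubenetUnsupported(t *testing.T) {
	t.Helper()
	if rt := containerRuntime(); rt != "docker" {
		t.Skipf("Skipping the test as the %s container runtime requires CNI", rt)
	}
}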

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (2.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-729010 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-729010

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-729010

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-729010

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-729010

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-729010

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-729010

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-729010

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-729010

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-729010

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-729010

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-729010

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-729010" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-729010

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-729010

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-729010

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-729010

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-729010" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-729010" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-729010

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-729010" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729010"

                                                
                                                
----------------------- debugLogs end: cilium-729010 [took: 2.813047951s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-729010" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-729010
--- SKIP: TestNetworkPlugins/group/cilium (2.94s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-603863" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-603863
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
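Each skipped entry above (kubenet, cilium, disable-driver-mounts) still leaves a named minikube profile behind, so the helpers delete it before finishing, as the "Cleaning up ... profile" lines show. A minimal sketch of that cleanup, assuming a hypothetical helper name; only the delete command itself is taken from the log:

// Hedged sketch of the profile cleanup seen in helpers_test.go output above;
// the helper name and error handling are assumptions. The delete invocation
// mirrors the logged command "out/minikube-linux-amd64 delete -p <profile>".
package helpers_test

import (
	"os/exec"
	"testing"
)

func cleanupProfile(t *testing.T, profile string) {
	t.Helper()
	t.Logf("Cleaning up %q profile ...", profile)
	cmd := exec.Command("out/minikube-linux-amd64", "delete", "-p", profile)
	if out, err := cmd.CombinedOutput(); err != nil {
		t.Logf("failed to delete profile %s: %v\n%s", profile, err, out)
	}
}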

                                                
                                    